Test Report: Docker_Linux_crio_arm64 17102

38d5550e53f52b04c4b197c514428c4ecd9b2e1a:2023-08-21:30667

Failed tests (7/310)

| Order | Failed test                                         | Duration (s) |
|-------|-----------------------------------------------------|--------------|
| 32    | TestAddons/parallel/Ingress                         | 170.13       |
| 86    | TestFunctional/parallel/DashboardCmd                | 5.81         |
| 161   | TestIngressAddonLegacy/serial/ValidateIngressAddons | 184.79       |
| 211   | TestMultiNode/serial/PingHostFrom2Pods              | 4.55         |
| 232   | TestRunningBinaryUpgrade                            | 71.68        |
| 235   | TestMissingContainerUpgrade                         | 145.63      |
| 247   | TestStoppedBinaryUpgrade/Upgrade                    | 93.01        |
TestAddons/parallel/Ingress (170.13s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:183: (dbg) Run:  kubectl --context addons-664125 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:208: (dbg) Run:  kubectl --context addons-664125 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context addons-664125 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [b79f0c94-0cf5-489e-9c81-3c8d3737b2bb] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [b79f0c94-0cf5-489e-9c81-3c8d3737b2bb] Running
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.016845792s
addons_test.go:238: (dbg) Run:  out/minikube-linux-arm64 -p addons-664125 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:238: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-664125 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.598909426s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:254: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
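The `ssh: Process exited with status 28` above is curl's own exit code 28 ("operation timed out") propagated through `minikube ssh`: the TCP connection to port 80 succeeds, but the ingress controller never sends a response. That failure mode can be sketched locally with a throwaway socket standing in for the unresponsive controller (nothing here touches minikube or the test cluster; host, port, and the request bytes are illustrative):

```python
import socket

# A server that completes TCP handshakes (via the listen backlog) but never
# answers -- a stand-in for the unresponsive ingress controller.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
host, port = srv.getsockname()

cli = socket.create_connection((host, port), timeout=1)
cli.settimeout(0.5)  # analogous to curl giving up after its timeout
cli.sendall(b"GET / HTTP/1.1\r\nHost: nginx.example.com\r\n\r\n")
try:
    cli.recv(4096)          # blocks: the peer never replies
    result = "got response"
except socket.timeout:
    result = "timed out"    # curl reports this condition as exit code 28
print(result)
cli.close()
srv.close()
```

The connect itself succeeding is why the test burns the full 2m10s rather than failing fast: only the read phase times out.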
addons_test.go:262: (dbg) Run:  kubectl --context addons-664125 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:262: (dbg) Done: kubectl --context addons-664125 replace --force -f testdata/ingress-dns-example-v1.yaml: (1.05493071s)
addons_test.go:267: (dbg) Run:  out/minikube-linux-arm64 -p addons-664125 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:273: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.053490924s)

-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
addons_test.go:275: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:279: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached
stderr: 
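`;; connection timed out; no servers could be reached` is nslookup's client-side timeout: the UDP query to 192.168.49.2 simply gets no reply before the resolver's deadline. The same behavior can be sketched with a local UDP socket that accepts datagrams but never responds (a stand-in for the unreachable ingress-dns endpoint; no real DNS is queried, and the 12-byte payload is only a placeholder for a DNS header):

```python
import socket

# Bind a UDP port that never answers -- a stand-in for the unreachable
# DNS server at the cluster IP.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
addr = server.getsockname()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(0.5)              # resolvers use a similar per-query deadline
client.sendto(b"\x00" * 12, addr)   # placeholder for a 12-byte DNS header
try:
    client.recvfrom(512)
    result = "reply"
except socket.timeout:
    result = "no servers could be reached"
print(result)
client.close()
server.close()
```

Because the port is bound, the datagram is silently queued rather than rejected with ICMP "port unreachable", so the client sees a timeout instead of an immediate refusal, matching the ~15s nslookup failure above.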
addons_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p addons-664125 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p addons-664125 addons disable ingress-dns --alsologtostderr -v=1: (1.816013393s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-arm64 -p addons-664125 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-arm64 -p addons-664125 addons disable ingress --alsologtostderr -v=1: (7.803405387s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-664125
helpers_test.go:235: (dbg) docker inspect addons-664125:

-- stdout --
	[
	    {
	        "Id": "1daeaf756e17c69b43e64d7e3939dd90301d0e05d65073b51c2286e73786cb2b",
	        "Created": "2023-08-21T11:03:01.359550943Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2740974,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-08-21T11:03:01.698453869Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:f52519afe5f6d6f3ce84cbd7f651b1292638d32ca98ee43d88f2d69e113e44de",
	        "ResolvConfPath": "/var/lib/docker/containers/1daeaf756e17c69b43e64d7e3939dd90301d0e05d65073b51c2286e73786cb2b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1daeaf756e17c69b43e64d7e3939dd90301d0e05d65073b51c2286e73786cb2b/hostname",
	        "HostsPath": "/var/lib/docker/containers/1daeaf756e17c69b43e64d7e3939dd90301d0e05d65073b51c2286e73786cb2b/hosts",
	        "LogPath": "/var/lib/docker/containers/1daeaf756e17c69b43e64d7e3939dd90301d0e05d65073b51c2286e73786cb2b/1daeaf756e17c69b43e64d7e3939dd90301d0e05d65073b51c2286e73786cb2b-json.log",
	        "Name": "/addons-664125",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-664125:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-664125",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/bd8ba38645f7fcd688f8b8c469188586cc38b70cff9331c7a1fb3fd0c178b485-init/diff:/var/lib/docker/overlay2/26861af3348249541ea382b8036362f60ea7ec122121fce2bcb8576e1879b2cb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bd8ba38645f7fcd688f8b8c469188586cc38b70cff9331c7a1fb3fd0c178b485/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bd8ba38645f7fcd688f8b8c469188586cc38b70cff9331c7a1fb3fd0c178b485/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bd8ba38645f7fcd688f8b8c469188586cc38b70cff9331c7a1fb3fd0c178b485/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-664125",
	                "Source": "/var/lib/docker/volumes/addons-664125/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-664125",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-664125",
	                "name.minikube.sigs.k8s.io": "addons-664125",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6f44c1d817561e55290f1efec73e77949ec077c8b9284f94d12f69c052421d52",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36188"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36187"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36184"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36186"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36185"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/6f44c1d81756",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-664125": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "1daeaf756e17",
	                        "addons-664125"
	                    ],
	                    "NetworkID": "283159e05094fb18daad881553e71b0df9569da1aeea7d8c6e122fda45ec0eaf",
	                    "EndpointID": "d7ffcc24099125ffd09f5051ad775706d59e793a7d0fd20b6ffe4ae0a754009e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-664125 -n addons-664125
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-664125 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-664125 logs -n 25: (1.571279652s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-658925   | jenkins | v1.31.2 | 21 Aug 23 11:01 UTC |                     |
	|         | -p download-only-658925           |                        |         |         |                     |                     |
	|         | --force --alsologtostderr         |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                        |         |         |                     |                     |
	|         | --container-runtime=crio          |                        |         |         |                     |                     |
	|         | --driver=docker                   |                        |         |         |                     |                     |
	|         | --container-runtime=crio          |                        |         |         |                     |                     |
	| start   | -o=json --download-only           | download-only-658925   | jenkins | v1.31.2 | 21 Aug 23 11:02 UTC |                     |
	|         | -p download-only-658925           |                        |         |         |                     |                     |
	|         | --force --alsologtostderr         |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.4      |                        |         |         |                     |                     |
	|         | --container-runtime=crio          |                        |         |         |                     |                     |
	|         | --driver=docker                   |                        |         |         |                     |                     |
	|         | --container-runtime=crio          |                        |         |         |                     |                     |
	| start   | -o=json --download-only           | download-only-658925   | jenkins | v1.31.2 | 21 Aug 23 11:02 UTC |                     |
	|         | -p download-only-658925           |                        |         |         |                     |                     |
	|         | --force --alsologtostderr         |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.0-rc.1 |                        |         |         |                     |                     |
	|         | --container-runtime=crio          |                        |         |         |                     |                     |
	|         | --driver=docker                   |                        |         |         |                     |                     |
	|         | --container-runtime=crio          |                        |         |         |                     |                     |
	| delete  | --all                             | minikube               | jenkins | v1.31.2 | 21 Aug 23 11:02 UTC | 21 Aug 23 11:02 UTC |
	| delete  | -p download-only-658925           | download-only-658925   | jenkins | v1.31.2 | 21 Aug 23 11:02 UTC | 21 Aug 23 11:02 UTC |
	| delete  | -p download-only-658925           | download-only-658925   | jenkins | v1.31.2 | 21 Aug 23 11:02 UTC | 21 Aug 23 11:02 UTC |
	| start   | --download-only -p                | download-docker-910339 | jenkins | v1.31.2 | 21 Aug 23 11:02 UTC |                     |
	|         | download-docker-910339            |                        |         |         |                     |                     |
	|         | --alsologtostderr                 |                        |         |         |                     |                     |
	|         | --driver=docker                   |                        |         |         |                     |                     |
	|         | --container-runtime=crio          |                        |         |         |                     |                     |
	| delete  | -p download-docker-910339         | download-docker-910339 | jenkins | v1.31.2 | 21 Aug 23 11:02 UTC | 21 Aug 23 11:02 UTC |
	| start   | --download-only -p                | binary-mirror-393918   | jenkins | v1.31.2 | 21 Aug 23 11:02 UTC |                     |
	|         | binary-mirror-393918              |                        |         |         |                     |                     |
	|         | --alsologtostderr                 |                        |         |         |                     |                     |
	|         | --binary-mirror                   |                        |         |         |                     |                     |
	|         | http://127.0.0.1:38977            |                        |         |         |                     |                     |
	|         | --driver=docker                   |                        |         |         |                     |                     |
	|         | --container-runtime=crio          |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-393918           | binary-mirror-393918   | jenkins | v1.31.2 | 21 Aug 23 11:02 UTC | 21 Aug 23 11:02 UTC |
	| start   | -p addons-664125                  | addons-664125          | jenkins | v1.31.2 | 21 Aug 23 11:02 UTC | 21 Aug 23 11:05 UTC |
	|         | --wait=true --memory=4000         |                        |         |         |                     |                     |
	|         | --alsologtostderr                 |                        |         |         |                     |                     |
	|         | --addons=registry                 |                        |         |         |                     |                     |
	|         | --addons=metrics-server           |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots          |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver      |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                 |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner            |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget         |                        |         |         |                     |                     |
	|         | --driver=docker                   |                        |         |         |                     |                     |
	|         | --container-runtime=crio          |                        |         |         |                     |                     |
	|         | --addons=ingress                  |                        |         |         |                     |                     |
	|         | --addons=ingress-dns              |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p          | addons-664125          | jenkins | v1.31.2 | 21 Aug 23 11:05 UTC | 21 Aug 23 11:05 UTC |
	|         | addons-664125                     |                        |         |         |                     |                     |
	| addons  | enable headlamp                   | addons-664125          | jenkins | v1.31.2 | 21 Aug 23 11:05 UTC | 21 Aug 23 11:05 UTC |
	|         | -p addons-664125                  |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1            |                        |         |         |                     |                     |
	| ip      | addons-664125 ip                  | addons-664125          | jenkins | v1.31.2 | 21 Aug 23 11:05 UTC | 21 Aug 23 11:05 UTC |
	| addons  | addons-664125 addons disable      | addons-664125          | jenkins | v1.31.2 | 21 Aug 23 11:05 UTC | 21 Aug 23 11:05 UTC |
	|         | registry --alsologtostderr        |                        |         |         |                     |                     |
	|         | -v=1                              |                        |         |         |                     |                     |
	| ssh     | addons-664125 ssh curl -s         | addons-664125          | jenkins | v1.31.2 | 21 Aug 23 11:05 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:       |                        |         |         |                     |                     |
	|         | nginx.example.com'                |                        |         |         |                     |                     |
	| addons  | addons-664125 addons              | addons-664125          | jenkins | v1.31.2 | 21 Aug 23 11:06 UTC | 21 Aug 23 11:06 UTC |
	|         | disable csi-hostpath-driver       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1            |                        |         |         |                     |                     |
	| addons  | addons-664125 addons              | addons-664125          | jenkins | v1.31.2 | 21 Aug 23 11:06 UTC | 21 Aug 23 11:06 UTC |
	|         | disable volumesnapshots           |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1            |                        |         |         |                     |                     |
	| addons  | addons-664125 addons              | addons-664125          | jenkins | v1.31.2 | 21 Aug 23 11:06 UTC | 21 Aug 23 11:06 UTC |
	|         | disable metrics-server            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1            |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p       | addons-664125          | jenkins | v1.31.2 | 21 Aug 23 11:06 UTC | 21 Aug 23 11:06 UTC |
	|         | addons-664125                     |                        |         |         |                     |                     |
	| ip      | addons-664125 ip                  | addons-664125          | jenkins | v1.31.2 | 21 Aug 23 11:08 UTC | 21 Aug 23 11:08 UTC |
	| addons  | addons-664125 addons disable      | addons-664125          | jenkins | v1.31.2 | 21 Aug 23 11:08 UTC | 21 Aug 23 11:08 UTC |
	|         | ingress-dns --alsologtostderr     |                        |         |         |                     |                     |
	|         | -v=1                              |                        |         |         |                     |                     |
	| addons  | addons-664125 addons disable      | addons-664125          | jenkins | v1.31.2 | 21 Aug 23 11:08 UTC | 21 Aug 23 11:08 UTC |
	|         | ingress --alsologtostderr -v=1    |                        |         |         |                     |                     |
	|---------|-----------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/21 11:02:37
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.20.7 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0821 11:02:37.518903 2740506 out.go:296] Setting OutFile to fd 1 ...
	I0821 11:02:37.519031 2740506 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 11:02:37.519040 2740506 out.go:309] Setting ErrFile to fd 2...
	I0821 11:02:37.519046 2740506 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 11:02:37.519270 2740506 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17102-2734539/.minikube/bin
	I0821 11:02:37.519670 2740506 out.go:303] Setting JSON to false
	I0821 11:02:37.520590 2740506 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":71101,"bootTime":1692544656,"procs":254,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0821 11:02:37.520648 2740506 start.go:138] virtualization:  
	I0821 11:02:37.523676 2740506 out.go:177] * [addons-664125] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0821 11:02:37.526302 2740506 out.go:177]   - MINIKUBE_LOCATION=17102
	I0821 11:02:37.528240 2740506 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0821 11:02:37.526459 2740506 notify.go:220] Checking for updates...
	I0821 11:02:37.532446 2740506 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17102-2734539/kubeconfig
	I0821 11:02:37.534354 2740506 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17102-2734539/.minikube
	I0821 11:02:37.536476 2740506 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0821 11:02:37.538461 2740506 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0821 11:02:37.540744 2740506 driver.go:373] Setting default libvirt URI to qemu:///system
	I0821 11:02:37.565285 2740506 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0821 11:02:37.565382 2740506 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0821 11:02:37.651975 2740506 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-08-21 11:02:37.642713408 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0821 11:02:37.652081 2740506 docker.go:294] overlay module found
	I0821 11:02:37.654549 2740506 out.go:177] * Using the docker driver based on user configuration
	I0821 11:02:37.656332 2740506 start.go:298] selected driver: docker
	I0821 11:02:37.656346 2740506 start.go:902] validating driver "docker" against <nil>
	I0821 11:02:37.656359 2740506 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0821 11:02:37.656979 2740506 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0821 11:02:37.726194 2740506 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-08-21 11:02:37.716399994 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0821 11:02:37.726354 2740506 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0821 11:02:37.726574 2740506 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0821 11:02:37.728844 2740506 out.go:177] * Using Docker driver with root privileges
	I0821 11:02:37.731175 2740506 cni.go:84] Creating CNI manager for ""
	I0821 11:02:37.731192 2740506 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0821 11:02:37.731207 2740506 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0821 11:02:37.731220 2740506 start_flags.go:319] config:
	{Name:addons-664125 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:addons-664125 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 11:02:37.733581 2740506 out.go:177] * Starting control plane node addons-664125 in cluster addons-664125
	I0821 11:02:37.735749 2740506 cache.go:122] Beginning downloading kic base image for docker with crio
	I0821 11:02:37.738136 2740506 out.go:177] * Pulling base image ...
	I0821 11:02:37.740567 2740506 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0821 11:02:37.740618 2740506 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0821 11:02:37.740625 2740506 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-arm64.tar.lz4
	I0821 11:02:37.740727 2740506 cache.go:57] Caching tarball of preloaded images
	I0821 11:02:37.740790 2740506 preload.go:174] Found /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0821 11:02:37.740799 2740506 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on crio
	I0821 11:02:37.741119 2740506 profile.go:148] Saving config to /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/addons-664125/config.json ...
	I0821 11:02:37.741147 2740506 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/addons-664125/config.json: {Name:mkc6c43749b91a52a3a1d46465b37700732701d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 11:02:37.757272 2740506 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 to local cache
	I0821 11:02:37.757373 2740506 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local cache directory
	I0821 11:02:37.757391 2740506 image.go:66] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local cache directory, skipping pull
	I0821 11:02:37.757396 2740506 image.go:105] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in cache, skipping pull
	I0821 11:02:37.757402 2740506 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 as a tarball
	I0821 11:02:37.757408 2740506 cache.go:163] Loading gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 from local cache
	I0821 11:02:53.576565 2740506 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 from cached tarball
	I0821 11:02:53.576601 2740506 cache.go:195] Successfully downloaded all kic artifacts
	I0821 11:02:53.576679 2740506 start.go:365] acquiring machines lock for addons-664125: {Name:mk2944d751bf796ed5124e835a465e3c7744a8af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 11:02:53.576804 2740506 start.go:369] acquired machines lock for "addons-664125" in 102.488µs
	I0821 11:02:53.576843 2740506 start.go:93] Provisioning new machine with config: &{Name:addons-664125 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:addons-664125 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0821 11:02:53.576933 2740506 start.go:125] createHost starting for "" (driver="docker")
	I0821 11:02:53.579209 2740506 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0821 11:02:53.579441 2740506 start.go:159] libmachine.API.Create for "addons-664125" (driver="docker")
	I0821 11:02:53.579475 2740506 client.go:168] LocalClient.Create starting
	I0821 11:02:53.579582 2740506 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17102-2734539/.minikube/certs/ca.pem
	I0821 11:02:54.093393 2740506 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17102-2734539/.minikube/certs/cert.pem
	I0821 11:02:55.197218 2740506 cli_runner.go:164] Run: docker network inspect addons-664125 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0821 11:02:55.217336 2740506 cli_runner.go:211] docker network inspect addons-664125 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0821 11:02:55.217423 2740506 network_create.go:281] running [docker network inspect addons-664125] to gather additional debugging logs...
	I0821 11:02:55.217439 2740506 cli_runner.go:164] Run: docker network inspect addons-664125
	W0821 11:02:55.233460 2740506 cli_runner.go:211] docker network inspect addons-664125 returned with exit code 1
	I0821 11:02:55.233496 2740506 network_create.go:284] error running [docker network inspect addons-664125]: docker network inspect addons-664125: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-664125 not found
	I0821 11:02:55.233528 2740506 network_create.go:286] output of [docker network inspect addons-664125]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-664125 not found
	
	** /stderr **
	I0821 11:02:55.233594 2740506 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0821 11:02:55.251822 2740506 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40011c2970}
	I0821 11:02:55.251859 2740506 network_create.go:123] attempt to create docker network addons-664125 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0821 11:02:55.251929 2740506 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-664125 addons-664125
	I0821 11:02:55.326945 2740506 network_create.go:107] docker network addons-664125 192.168.49.0/24 created
	I0821 11:02:55.326973 2740506 kic.go:117] calculated static IP "192.168.49.2" for the "addons-664125" container
	I0821 11:02:55.327049 2740506 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0821 11:02:55.342732 2740506 cli_runner.go:164] Run: docker volume create addons-664125 --label name.minikube.sigs.k8s.io=addons-664125 --label created_by.minikube.sigs.k8s.io=true
	I0821 11:02:55.364818 2740506 oci.go:103] Successfully created a docker volume addons-664125
	I0821 11:02:55.364912 2740506 cli_runner.go:164] Run: docker run --rm --name addons-664125-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-664125 --entrypoint /usr/bin/test -v addons-664125:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib
	I0821 11:02:57.205028 2740506 cli_runner.go:217] Completed: docker run --rm --name addons-664125-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-664125 --entrypoint /usr/bin/test -v addons-664125:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib: (1.840074561s)
	I0821 11:02:57.205059 2740506 oci.go:107] Successfully prepared a docker volume addons-664125
	I0821 11:02:57.205080 2740506 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0821 11:02:57.205098 2740506 kic.go:190] Starting extracting preloaded images to volume ...
	I0821 11:02:57.205179 2740506 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-664125:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir
	I0821 11:03:01.279009 2740506 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-664125:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir: (4.073784258s)
	I0821 11:03:01.279040 2740506 kic.go:199] duration metric: took 4.073939 seconds to extract preloaded images to volume
	W0821 11:03:01.279170 2740506 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0821 11:03:01.279276 2740506 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0821 11:03:01.344233 2740506 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-664125 --name addons-664125 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-664125 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-664125 --network addons-664125 --ip 192.168.49.2 --volume addons-664125:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631
	I0821 11:03:01.707602 2740506 cli_runner.go:164] Run: docker container inspect addons-664125 --format={{.State.Running}}
	I0821 11:03:01.734830 2740506 cli_runner.go:164] Run: docker container inspect addons-664125 --format={{.State.Status}}
	I0821 11:03:01.759654 2740506 cli_runner.go:164] Run: docker exec addons-664125 stat /var/lib/dpkg/alternatives/iptables
	I0821 11:03:01.860656 2740506 oci.go:144] the created container "addons-664125" has a running status.
	I0821 11:03:01.860684 2740506 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17102-2734539/.minikube/machines/addons-664125/id_rsa...
	I0821 11:03:02.850573 2740506 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17102-2734539/.minikube/machines/addons-664125/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0821 11:03:02.877355 2740506 cli_runner.go:164] Run: docker container inspect addons-664125 --format={{.State.Status}}
	I0821 11:03:02.898976 2740506 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0821 11:03:02.899000 2740506 kic_runner.go:114] Args: [docker exec --privileged addons-664125 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0821 11:03:02.973146 2740506 cli_runner.go:164] Run: docker container inspect addons-664125 --format={{.State.Status}}
	I0821 11:03:03.001574 2740506 machine.go:88] provisioning docker machine ...
	I0821 11:03:03.001609 2740506 ubuntu.go:169] provisioning hostname "addons-664125"
	I0821 11:03:03.001687 2740506 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-664125
	I0821 11:03:03.033765 2740506 main.go:141] libmachine: Using SSH client type: native
	I0821 11:03:03.034349 2740506 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0570] 0x3a2f00 <nil>  [] 0s} 127.0.0.1 36188 <nil> <nil>}
	I0821 11:03:03.034374 2740506 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-664125 && echo "addons-664125" | sudo tee /etc/hostname
	I0821 11:03:03.189648 2740506 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-664125
	
	I0821 11:03:03.189737 2740506 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-664125
	I0821 11:03:03.208551 2740506 main.go:141] libmachine: Using SSH client type: native
	I0821 11:03:03.208991 2740506 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0570] 0x3a2f00 <nil>  [] 0s} 127.0.0.1 36188 <nil> <nil>}
	I0821 11:03:03.209015 2740506 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-664125' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-664125/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-664125' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0821 11:03:03.343386 2740506 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0821 11:03:03.343455 2740506 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17102-2734539/.minikube CaCertPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17102-2734539/.minikube}
	I0821 11:03:03.343491 2740506 ubuntu.go:177] setting up certificates
	I0821 11:03:03.343527 2740506 provision.go:83] configureAuth start
	I0821 11:03:03.343610 2740506 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-664125
	I0821 11:03:03.364755 2740506 provision.go:138] copyHostCerts
	I0821 11:03:03.364841 2740506 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17102-2734539/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17102-2734539/.minikube/ca.pem (1078 bytes)
	I0821 11:03:03.364965 2740506 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17102-2734539/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17102-2734539/.minikube/cert.pem (1123 bytes)
	I0821 11:03:03.365029 2740506 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17102-2734539/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17102-2734539/.minikube/key.pem (1675 bytes)
	I0821 11:03:03.365078 2740506 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17102-2734539/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17102-2734539/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17102-2734539/.minikube/certs/ca-key.pem org=jenkins.addons-664125 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-664125]
	I0821 11:03:03.855973 2740506 provision.go:172] copyRemoteCerts
	I0821 11:03:03.856043 2740506 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0821 11:03:03.856083 2740506 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-664125
	I0821 11:03:03.875649 2740506 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36188 SSHKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/machines/addons-664125/id_rsa Username:docker}
	I0821 11:03:03.972521 2740506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-2734539/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0821 11:03:04.002005 2740506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-2734539/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0821 11:03:04.032955 2740506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-2734539/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0821 11:03:04.061646 2740506 provision.go:86] duration metric: configureAuth took 718.088954ms
	I0821 11:03:04.061672 2740506 ubuntu.go:193] setting minikube options for container-runtime
	I0821 11:03:04.061866 2740506 config.go:182] Loaded profile config "addons-664125": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0821 11:03:04.062062 2740506 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-664125
	I0821 11:03:04.079541 2740506 main.go:141] libmachine: Using SSH client type: native
	I0821 11:03:04.079978 2740506 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0570] 0x3a2f00 <nil>  [] 0s} 127.0.0.1 36188 <nil> <nil>}
	I0821 11:03:04.079999 2740506 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0821 11:03:04.323338 2740506 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0821 11:03:04.323359 2740506 machine.go:91] provisioned docker machine in 1.321763659s
	I0821 11:03:04.323368 2740506 client.go:171] LocalClient.Create took 10.743887051s
	I0821 11:03:04.323379 2740506 start.go:167] duration metric: libmachine.API.Create for "addons-664125" took 10.743939267s
	I0821 11:03:04.323386 2740506 start.go:300] post-start starting for "addons-664125" (driver="docker")
	I0821 11:03:04.323396 2740506 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0821 11:03:04.323458 2740506 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0821 11:03:04.323500 2740506 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-664125
	I0821 11:03:04.345311 2740506 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36188 SSHKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/machines/addons-664125/id_rsa Username:docker}
	I0821 11:03:04.440763 2740506 ssh_runner.go:195] Run: cat /etc/os-release
	I0821 11:03:04.444773 2740506 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0821 11:03:04.444825 2740506 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0821 11:03:04.444838 2740506 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0821 11:03:04.444845 2740506 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0821 11:03:04.444855 2740506 filesync.go:126] Scanning /home/jenkins/minikube-integration/17102-2734539/.minikube/addons for local assets ...
	I0821 11:03:04.444922 2740506 filesync.go:126] Scanning /home/jenkins/minikube-integration/17102-2734539/.minikube/files for local assets ...
	I0821 11:03:04.444952 2740506 start.go:303] post-start completed in 121.56065ms
	I0821 11:03:04.445262 2740506 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-664125
	I0821 11:03:04.463768 2740506 profile.go:148] Saving config to /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/addons-664125/config.json ...
	I0821 11:03:04.464048 2740506 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0821 11:03:04.464102 2740506 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-664125
	I0821 11:03:04.481202 2740506 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36188 SSHKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/machines/addons-664125/id_rsa Username:docker}
	I0821 11:03:04.571901 2740506 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0821 11:03:04.577509 2740506 start.go:128] duration metric: createHost completed in 11.000561433s
	I0821 11:03:04.577531 2740506 start.go:83] releasing machines lock for "addons-664125", held for 11.000715809s
	I0821 11:03:04.577602 2740506 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-664125
	I0821 11:03:04.594610 2740506 ssh_runner.go:195] Run: cat /version.json
	I0821 11:03:04.594646 2740506 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0821 11:03:04.594670 2740506 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-664125
	I0821 11:03:04.594706 2740506 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-664125
	I0821 11:03:04.618742 2740506 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36188 SSHKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/machines/addons-664125/id_rsa Username:docker}
	I0821 11:03:04.622023 2740506 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36188 SSHKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/machines/addons-664125/id_rsa Username:docker}
	I0821 11:03:04.841013 2740506 ssh_runner.go:195] Run: systemctl --version
	I0821 11:03:04.846851 2740506 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0821 11:03:04.997077 2740506 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0821 11:03:05.005678 2740506 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0821 11:03:05.038942 2740506 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0821 11:03:05.039020 2740506 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0821 11:03:05.080755 2740506 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0821 11:03:05.080783 2740506 start.go:466] detecting cgroup driver to use...
	I0821 11:03:05.080831 2740506 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0821 11:03:05.080913 2740506 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0821 11:03:05.100715 2740506 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0821 11:03:05.114918 2740506 docker.go:196] disabling cri-docker service (if available) ...
	I0821 11:03:05.114982 2740506 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0821 11:03:05.131703 2740506 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0821 11:03:05.149365 2740506 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0821 11:03:05.252559 2740506 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0821 11:03:05.357731 2740506 docker.go:212] disabling docker service ...
	I0821 11:03:05.357805 2740506 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0821 11:03:05.379166 2740506 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0821 11:03:05.393159 2740506 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0821 11:03:05.490998 2740506 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0821 11:03:05.588121 2740506 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0821 11:03:05.601255 2740506 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
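The crictl configuration written above amounts to a one-line YAML file pointing crictl at the CRI-O socket. A sketch against a temp file (the real target is /etc/crictl.yaml and needs root):

```shell
# Write the crictl endpoint config the log shows, into a temp file instead of
# /etc/crictl.yaml. The content matches the log verbatim.
f="$(mktemp)"
printf '%s' 'runtime-endpoint: unix:///var/run/crio/crio.sock
' > "$f"
```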
	I0821 11:03:05.620055 2740506 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0821 11:03:05.620127 2740506 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0821 11:03:05.632100 2740506 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0821 11:03:05.632179 2740506 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0821 11:03:05.644411 2740506 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0821 11:03:05.656778 2740506 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
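The four sed edits above (swap the pause image, force the cgroupfs cgroup manager, then replace any conmon_cgroup setting with "pod") can be sketched against a sample 02-crio.conf fragment. The starting file content below is an assumption for illustration; the real file lives in /etc/crio/crio.conf.d:

```shell
# Apply the same sed sequence the log runs, against an illustrative sample.
conf="$(mktemp)"
cat > "$conf" <<'EOF'
pause_image = "k8s.gcr.io/pause:3.6"
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
EOF

sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$conf"
sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$conf"
sed -i '/conmon_cgroup = .*/d' "$conf"                              # drop old value
sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$conf"       # append new one
```

Deleting and re-appending conmon_cgroup (rather than substituting in place) also handles the case where the key was absent from the file.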
	I0821 11:03:05.668923 2740506 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0821 11:03:05.679788 2740506 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0821 11:03:05.689807 2740506 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0821 11:03:05.699834 2740506 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0821 11:03:05.797598 2740506 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0821 11:03:05.911506 2740506 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0821 11:03:05.911641 2740506 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0821 11:03:05.916388 2740506 start.go:534] Will wait 60s for crictl version
	I0821 11:03:05.916487 2740506 ssh_runner.go:195] Run: which crictl
	I0821 11:03:05.920667 2740506 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0821 11:03:05.962583 2740506 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0821 11:03:05.962747 2740506 ssh_runner.go:195] Run: crio --version
	I0821 11:03:06.005378 2740506 ssh_runner.go:195] Run: crio --version
	I0821 11:03:06.057895 2740506 out.go:177] * Preparing Kubernetes v1.27.4 on CRI-O 1.24.6 ...
	I0821 11:03:06.059688 2740506 cli_runner.go:164] Run: docker network inspect addons-664125 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0821 11:03:06.080534 2740506 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0821 11:03:06.085364 2740506 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0821 11:03:06.098870 2740506 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0821 11:03:06.098941 2740506 ssh_runner.go:195] Run: sudo crictl images --output json
	I0821 11:03:06.161060 2740506 crio.go:496] all images are preloaded for cri-o runtime.
	I0821 11:03:06.161079 2740506 crio.go:415] Images already preloaded, skipping extraction
	I0821 11:03:06.161135 2740506 ssh_runner.go:195] Run: sudo crictl images --output json
	I0821 11:03:06.201328 2740506 crio.go:496] all images are preloaded for cri-o runtime.
	I0821 11:03:06.201384 2740506 cache_images.go:84] Images are preloaded, skipping loading
	I0821 11:03:06.201492 2740506 ssh_runner.go:195] Run: crio config
	I0821 11:03:06.254483 2740506 cni.go:84] Creating CNI manager for ""
	I0821 11:03:06.254542 2740506 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0821 11:03:06.254582 2740506 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0821 11:03:06.254605 2740506 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.27.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-664125 NodeName:addons-664125 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0821 11:03:06.254743 2740506 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-664125"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0821 11:03:06.254808 2740506 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-664125 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.4 ClusterName:addons-664125 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0821 11:03:06.254875 2740506 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.4
	I0821 11:03:06.265116 2740506 binaries.go:44] Found k8s binaries, skipping transfer
	I0821 11:03:06.265232 2740506 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0821 11:03:06.275544 2740506 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I0821 11:03:06.295762 2740506 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0821 11:03:06.316936 2740506 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I0821 11:03:06.337867 2740506 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0821 11:03:06.342415 2740506 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
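The /etc/hosts update above uses a strip-then-append pattern: filter out any stale line for the hostname, append the fresh mapping, then copy the result back over the original. A sketch against a temp file standing in for /etc/hosts (the starting content is illustrative):

```shell
# Mirror minikube's hosts-file update: drop any existing entry for the name,
# then append the current IP mapping.
hosts="$(mktemp)"
tab="$(printf '\t')"
printf '127.0.0.1\tlocalhost\n192.168.49.1\tcontrol-plane.minikube.internal\n' > "$hosts"

{ grep -v "${tab}control-plane.minikube.internal$" "$hosts"
  printf '192.168.49.2\tcontrol-plane.minikube.internal\n'; } > "$hosts.new"
cp "$hosts.new" "$hosts"
```

Writing to a temp file first and then `cp`-ing it back is what the log's one-liner does too; it avoids truncating /etc/hosts while grep is still reading it.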
	I0821 11:03:06.355206 2740506 certs.go:56] Setting up /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/addons-664125 for IP: 192.168.49.2
	I0821 11:03:06.355236 2740506 certs.go:190] acquiring lock for shared ca certs: {Name:mkf22db11ef8c10db9220127fbe1c5ce3b246b6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 11:03:06.355422 2740506 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17102-2734539/.minikube/ca.key
	I0821 11:03:07.550935 2740506 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17102-2734539/.minikube/ca.crt ...
	I0821 11:03:07.550967 2740506 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17102-2734539/.minikube/ca.crt: {Name:mk1c2f6714f8d3bcdad406d4a15601ae38da827d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 11:03:07.551162 2740506 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17102-2734539/.minikube/ca.key ...
	I0821 11:03:07.551175 2740506 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17102-2734539/.minikube/ca.key: {Name:mk6cbdd8865a14a4f01a8167c87357bc8f6068ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 11:03:07.551271 2740506 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17102-2734539/.minikube/proxy-client-ca.key
	I0821 11:03:09.211352 2740506 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17102-2734539/.minikube/proxy-client-ca.crt ...
	I0821 11:03:09.211383 2740506 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17102-2734539/.minikube/proxy-client-ca.crt: {Name:mk9c10ac884cd2beda870de5bff2b92ba490dda7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 11:03:09.212019 2740506 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17102-2734539/.minikube/proxy-client-ca.key ...
	I0821 11:03:09.212035 2740506 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17102-2734539/.minikube/proxy-client-ca.key: {Name:mkd19590203eafadacc82beb62ca2a5ac8e78bbe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 11:03:09.212166 2740506 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/addons-664125/client.key
	I0821 11:03:09.212181 2740506 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/addons-664125/client.crt with IP's: []
	I0821 11:03:09.452345 2740506 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/addons-664125/client.crt ...
	I0821 11:03:09.452374 2740506 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/addons-664125/client.crt: {Name:mk0f060389db834ae58ead0ab860ce160b0aecaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 11:03:09.453000 2740506 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/addons-664125/client.key ...
	I0821 11:03:09.453014 2740506 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/addons-664125/client.key: {Name:mk4623f1efa5cef4b1fa26ccdafa301589e1102a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 11:03:09.453420 2740506 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/addons-664125/apiserver.key.dd3b5fb2
	I0821 11:03:09.453442 2740506 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/addons-664125/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0821 11:03:09.717147 2740506 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/addons-664125/apiserver.crt.dd3b5fb2 ...
	I0821 11:03:09.717177 2740506 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/addons-664125/apiserver.crt.dd3b5fb2: {Name:mkaa740d798f6128fbe964bc0440d61efa1a99f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 11:03:09.717360 2740506 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/addons-664125/apiserver.key.dd3b5fb2 ...
	I0821 11:03:09.717372 2740506 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/addons-664125/apiserver.key.dd3b5fb2: {Name:mkd81103a9383e14c951c17cba99dd3f16489502 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 11:03:09.717447 2740506 certs.go:337] copying /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/addons-664125/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/addons-664125/apiserver.crt
	I0821 11:03:09.717518 2740506 certs.go:341] copying /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/addons-664125/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/addons-664125/apiserver.key
	I0821 11:03:09.717567 2740506 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/addons-664125/proxy-client.key
	I0821 11:03:09.717588 2740506 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/addons-664125/proxy-client.crt with IP's: []
	I0821 11:03:09.854998 2740506 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/addons-664125/proxy-client.crt ...
	I0821 11:03:09.855026 2740506 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/addons-664125/proxy-client.crt: {Name:mkbba9ad32d0009014819900035f8d77ca84fc8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 11:03:09.855202 2740506 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/addons-664125/proxy-client.key ...
	I0821 11:03:09.855214 2740506 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/addons-664125/proxy-client.key: {Name:mk8f55152e241415068b1a73d08a298831ed34c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 11:03:09.855792 2740506 certs.go:437] found cert: /home/jenkins/minikube-integration/17102-2734539/.minikube/certs/home/jenkins/minikube-integration/17102-2734539/.minikube/certs/ca-key.pem (1679 bytes)
	I0821 11:03:09.855834 2740506 certs.go:437] found cert: /home/jenkins/minikube-integration/17102-2734539/.minikube/certs/home/jenkins/minikube-integration/17102-2734539/.minikube/certs/ca.pem (1078 bytes)
	I0821 11:03:09.855864 2740506 certs.go:437] found cert: /home/jenkins/minikube-integration/17102-2734539/.minikube/certs/home/jenkins/minikube-integration/17102-2734539/.minikube/certs/cert.pem (1123 bytes)
	I0821 11:03:09.855891 2740506 certs.go:437] found cert: /home/jenkins/minikube-integration/17102-2734539/.minikube/certs/home/jenkins/minikube-integration/17102-2734539/.minikube/certs/key.pem (1675 bytes)
	I0821 11:03:09.856525 2740506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/addons-664125/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0821 11:03:09.886743 2740506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/addons-664125/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0821 11:03:09.914891 2740506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/addons-664125/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0821 11:03:09.942480 2740506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/addons-664125/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0821 11:03:09.969526 2740506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-2734539/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0821 11:03:09.996619 2740506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-2734539/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0821 11:03:10.030611 2740506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-2734539/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0821 11:03:10.059779 2740506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-2734539/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0821 11:03:10.088546 2740506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-2734539/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0821 11:03:10.118696 2740506 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0821 11:03:10.140160 2740506 ssh_runner.go:195] Run: openssl version
	I0821 11:03:10.147095 2740506 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0821 11:03:10.159082 2740506 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0821 11:03:10.163781 2740506 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 21 11:03 /usr/share/ca-certificates/minikubeCA.pem
	I0821 11:03:10.163856 2740506 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0821 11:03:10.172865 2740506 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
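The b5213941.0 symlink above follows OpenSSL's hashed-CA-directory convention: the link name is the certificate's subject hash (as printed by `openssl x509 -hash`) plus a ".0" suffix, which is how OpenSSL looks up CAs in /etc/ssl/certs. A sketch with a throwaway self-signed cert (requires the openssl CLI; the subject and paths are illustrative):

```shell
# Build a hashed-name symlink for a cert, the same convention minikube uses
# for minikubeCA.pem in /etc/ssl/certs.
d="$(mktemp -d)"
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=minikubeCA' \
  -keyout "$d/ca.key" -out "$d/ca.pem" -days 1 2>/dev/null
h="$(openssl x509 -hash -noout -in "$d/ca.pem")"   # subject hash, e.g. b5213941
ln -fs "$d/ca.pem" "$d/$h.0"
```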
	I0821 11:03:10.184510 2740506 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0821 11:03:10.189002 2740506 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0821 11:03:10.189056 2740506 kubeadm.go:404] StartCluster: {Name:addons-664125 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:addons-664125 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 11:03:10.189144 2740506 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0821 11:03:10.189214 2740506 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0821 11:03:10.232448 2740506 cri.go:89] found id: ""
	I0821 11:03:10.232521 2740506 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0821 11:03:10.243188 2740506 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0821 11:03:10.253576 2740506 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0821 11:03:10.253682 2740506 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0821 11:03:10.264371 2740506 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0821 11:03:10.264414 2740506 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0821 11:03:10.319257 2740506 kubeadm.go:322] [init] Using Kubernetes version: v1.27.4
	I0821 11:03:10.319491 2740506 kubeadm.go:322] [preflight] Running pre-flight checks
	I0821 11:03:10.367849 2740506 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0821 11:03:10.367920 2740506 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1041-aws
	I0821 11:03:10.367956 2740506 kubeadm.go:322] OS: Linux
	I0821 11:03:10.368004 2740506 kubeadm.go:322] CGROUPS_CPU: enabled
	I0821 11:03:10.368056 2740506 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0821 11:03:10.368105 2740506 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0821 11:03:10.368156 2740506 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0821 11:03:10.368205 2740506 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0821 11:03:10.368259 2740506 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0821 11:03:10.368306 2740506 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0821 11:03:10.368354 2740506 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0821 11:03:10.368400 2740506 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0821 11:03:10.447213 2740506 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0821 11:03:10.447326 2740506 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0821 11:03:10.447431 2740506 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0821 11:03:10.696065 2740506 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0821 11:03:10.699365 2740506 out.go:204]   - Generating certificates and keys ...
	I0821 11:03:10.699474 2740506 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0821 11:03:10.699642 2740506 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0821 11:03:11.081908 2740506 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0821 11:03:11.346278 2740506 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0821 11:03:11.752943 2740506 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0821 11:03:12.034785 2740506 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0821 11:03:12.524667 2740506 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0821 11:03:12.525006 2740506 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-664125 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0821 11:03:12.893955 2740506 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0821 11:03:12.894215 2740506 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-664125 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0821 11:03:13.166043 2740506 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0821 11:03:13.395135 2740506 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0821 11:03:13.555331 2740506 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0821 11:03:13.555612 2740506 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0821 11:03:13.870427 2740506 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0821 11:03:14.330471 2740506 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0821 11:03:14.631761 2740506 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0821 11:03:15.200243 2740506 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0821 11:03:15.211939 2740506 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0821 11:03:15.213173 2740506 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0821 11:03:15.213512 2740506 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0821 11:03:15.318344 2740506 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0821 11:03:15.322240 2740506 out.go:204]   - Booting up control plane ...
	I0821 11:03:15.322360 2740506 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0821 11:03:15.326218 2740506 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0821 11:03:15.330278 2740506 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0821 11:03:15.330368 2740506 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0821 11:03:15.331536 2740506 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0821 11:03:22.834352 2740506 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.502260 seconds
	I0821 11:03:22.834472 2740506 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0821 11:03:22.849534 2740506 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0821 11:03:23.374597 2740506 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0821 11:03:23.374780 2740506 kubeadm.go:322] [mark-control-plane] Marking the node addons-664125 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0821 11:03:23.886395 2740506 kubeadm.go:322] [bootstrap-token] Using token: 43h2cq.n0i7o9ghleib15qa
	I0821 11:03:23.888270 2740506 out.go:204]   - Configuring RBAC rules ...
	I0821 11:03:23.888383 2740506 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0821 11:03:23.893014 2740506 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0821 11:03:23.907025 2740506 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0821 11:03:23.910659 2740506 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0821 11:03:23.914331 2740506 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0821 11:03:23.918263 2740506 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0821 11:03:23.934239 2740506 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0821 11:03:24.198261 2740506 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0821 11:03:24.328354 2740506 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0821 11:03:24.329429 2740506 kubeadm.go:322] 
	I0821 11:03:24.329496 2740506 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0821 11:03:24.329506 2740506 kubeadm.go:322] 
	I0821 11:03:24.329578 2740506 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0821 11:03:24.329587 2740506 kubeadm.go:322] 
	I0821 11:03:24.329612 2740506 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0821 11:03:24.329670 2740506 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0821 11:03:24.329722 2740506 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0821 11:03:24.329730 2740506 kubeadm.go:322] 
	I0821 11:03:24.329781 2740506 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0821 11:03:24.329796 2740506 kubeadm.go:322] 
	I0821 11:03:24.329842 2740506 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0821 11:03:24.329850 2740506 kubeadm.go:322] 
	I0821 11:03:24.329910 2740506 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0821 11:03:24.329984 2740506 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0821 11:03:24.330050 2740506 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0821 11:03:24.330059 2740506 kubeadm.go:322] 
	I0821 11:03:24.330138 2740506 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0821 11:03:24.330214 2740506 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0821 11:03:24.330221 2740506 kubeadm.go:322] 
	I0821 11:03:24.330299 2740506 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 43h2cq.n0i7o9ghleib15qa \
	I0821 11:03:24.330399 2740506 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:53df1391c07b454a6b96f5fce415fe23bfbfcda331215b828a9e1234aa2104c1 \
	I0821 11:03:24.330428 2740506 kubeadm.go:322] 	--control-plane 
	I0821 11:03:24.330437 2740506 kubeadm.go:322] 
	I0821 11:03:24.335151 2740506 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0821 11:03:24.335168 2740506 kubeadm.go:322] 
	I0821 11:03:24.335246 2740506 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 43h2cq.n0i7o9ghleib15qa \
	I0821 11:03:24.335356 2740506 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:53df1391c07b454a6b96f5fce415fe23bfbfcda331215b828a9e1234aa2104c1 
	I0821 11:03:24.336947 2740506 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1041-aws\n", err: exit status 1
	I0821 11:03:24.337069 2740506 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0821 11:03:24.337091 2740506 cni.go:84] Creating CNI manager for ""
	I0821 11:03:24.337099 2740506 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0821 11:03:24.340546 2740506 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0821 11:03:24.342560 2740506 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0821 11:03:24.359249 2740506 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.27.4/kubectl ...
	I0821 11:03:24.359270 2740506 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0821 11:03:24.415273 2740506 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0821 11:03:25.484923 2740506 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.27.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.069611531s)
	I0821 11:03:25.484961 2740506 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0821 11:03:25.485070 2740506 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43 minikube.k8s.io/name=addons-664125 minikube.k8s.io/updated_at=2023_08_21T11_03_25_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:03:25.485071 2740506 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:03:25.601813 2740506 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:03:25.601904 2740506 ops.go:34] apiserver oom_adj: -16
	I0821 11:03:25.699045 2740506 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:03:26.291952 2740506 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:03:26.792532 2740506 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:03:27.292536 2740506 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:03:27.792153 2740506 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:03:28.292042 2740506 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:03:28.791735 2740506 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:03:29.292010 2740506 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:03:29.792139 2740506 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:03:30.291852 2740506 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:03:30.791572 2740506 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:03:31.292353 2740506 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:03:31.791573 2740506 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:03:32.292199 2740506 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:03:32.792104 2740506 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:03:33.292227 2740506 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:03:33.791799 2740506 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:03:34.291681 2740506 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:03:34.791545 2740506 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:03:35.291932 2740506 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:03:35.791808 2740506 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:03:36.292100 2740506 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:03:36.791570 2740506 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:03:37.292445 2740506 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:03:37.388729 2740506 kubeadm.go:1081] duration metric: took 11.903714641s to wait for elevateKubeSystemPrivileges.
	I0821 11:03:37.388755 2740506 kubeadm.go:406] StartCluster complete in 27.199703634s
	I0821 11:03:37.388770 2740506 settings.go:142] acquiring lock: {Name:mk3be5267b0ceee2c9bd00120994fcda13aa9019 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 11:03:37.388880 2740506 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17102-2734539/kubeconfig
	I0821 11:03:37.389279 2740506 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17102-2734539/kubeconfig: {Name:mk4bece1b106c2586469807b701290be2026992b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 11:03:37.390071 2740506 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0821 11:03:37.390373 2740506 config.go:182] Loaded profile config "addons-664125": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0821 11:03:37.390494 2740506 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0821 11:03:37.390585 2740506 addons.go:69] Setting volumesnapshots=true in profile "addons-664125"
	I0821 11:03:37.390598 2740506 addons.go:231] Setting addon volumesnapshots=true in "addons-664125"
	I0821 11:03:37.390653 2740506 host.go:66] Checking if "addons-664125" exists ...
	I0821 11:03:37.391104 2740506 cli_runner.go:164] Run: docker container inspect addons-664125 --format={{.State.Status}}
	I0821 11:03:37.392293 2740506 addons.go:69] Setting ingress-dns=true in profile "addons-664125"
	I0821 11:03:37.392307 2740506 addons.go:231] Setting addon ingress-dns=true in "addons-664125"
	I0821 11:03:37.392345 2740506 host.go:66] Checking if "addons-664125" exists ...
	I0821 11:03:37.392759 2740506 cli_runner.go:164] Run: docker container inspect addons-664125 --format={{.State.Status}}
	I0821 11:03:37.393039 2740506 addons.go:69] Setting inspektor-gadget=true in profile "addons-664125"
	I0821 11:03:37.393066 2740506 addons.go:231] Setting addon inspektor-gadget=true in "addons-664125"
	I0821 11:03:37.393102 2740506 host.go:66] Checking if "addons-664125" exists ...
	I0821 11:03:37.393492 2740506 cli_runner.go:164] Run: docker container inspect addons-664125 --format={{.State.Status}}
	I0821 11:03:37.393591 2740506 addons.go:69] Setting cloud-spanner=true in profile "addons-664125"
	I0821 11:03:37.393611 2740506 addons.go:231] Setting addon cloud-spanner=true in "addons-664125"
	I0821 11:03:37.393638 2740506 host.go:66] Checking if "addons-664125" exists ...
	I0821 11:03:37.394017 2740506 cli_runner.go:164] Run: docker container inspect addons-664125 --format={{.State.Status}}
	I0821 11:03:37.394086 2740506 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-664125"
	I0821 11:03:37.394118 2740506 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-664125"
	I0821 11:03:37.394148 2740506 host.go:66] Checking if "addons-664125" exists ...
	I0821 11:03:37.394500 2740506 cli_runner.go:164] Run: docker container inspect addons-664125 --format={{.State.Status}}
	I0821 11:03:37.394568 2740506 addons.go:69] Setting default-storageclass=true in profile "addons-664125"
	I0821 11:03:37.394580 2740506 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-664125"
	I0821 11:03:37.394785 2740506 cli_runner.go:164] Run: docker container inspect addons-664125 --format={{.State.Status}}
	I0821 11:03:37.394847 2740506 addons.go:69] Setting gcp-auth=true in profile "addons-664125"
	I0821 11:03:37.394874 2740506 mustload.go:65] Loading cluster: addons-664125
	I0821 11:03:37.395022 2740506 config.go:182] Loaded profile config "addons-664125": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0821 11:03:37.397998 2740506 cli_runner.go:164] Run: docker container inspect addons-664125 --format={{.State.Status}}
	I0821 11:03:37.398097 2740506 addons.go:69] Setting ingress=true in profile "addons-664125"
	I0821 11:03:37.398114 2740506 addons.go:231] Setting addon ingress=true in "addons-664125"
	I0821 11:03:37.398156 2740506 host.go:66] Checking if "addons-664125" exists ...
	I0821 11:03:37.398534 2740506 cli_runner.go:164] Run: docker container inspect addons-664125 --format={{.State.Status}}
	I0821 11:03:37.398822 2740506 addons.go:69] Setting storage-provisioner=true in profile "addons-664125"
	I0821 11:03:37.398839 2740506 addons.go:231] Setting addon storage-provisioner=true in "addons-664125"
	I0821 11:03:37.398869 2740506 host.go:66] Checking if "addons-664125" exists ...
	I0821 11:03:37.399335 2740506 cli_runner.go:164] Run: docker container inspect addons-664125 --format={{.State.Status}}
	I0821 11:03:37.399412 2740506 addons.go:69] Setting metrics-server=true in profile "addons-664125"
	I0821 11:03:37.399422 2740506 addons.go:231] Setting addon metrics-server=true in "addons-664125"
	I0821 11:03:37.399447 2740506 host.go:66] Checking if "addons-664125" exists ...
	I0821 11:03:37.399802 2740506 cli_runner.go:164] Run: docker container inspect addons-664125 --format={{.State.Status}}
	I0821 11:03:37.399859 2740506 addons.go:69] Setting registry=true in profile "addons-664125"
	I0821 11:03:37.399868 2740506 addons.go:231] Setting addon registry=true in "addons-664125"
	I0821 11:03:37.399892 2740506 host.go:66] Checking if "addons-664125" exists ...
	I0821 11:03:37.400229 2740506 cli_runner.go:164] Run: docker container inspect addons-664125 --format={{.State.Status}}
	I0821 11:03:37.478329 2740506 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0821 11:03:37.482198 2740506 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0821 11:03:37.482221 2740506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0821 11:03:37.482312 2740506 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-664125
	I0821 11:03:37.502626 2740506 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0821 11:03:37.505837 2740506 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0821 11:03:37.505861 2740506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0821 11:03:37.505991 2740506 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-664125
	I0821 11:03:37.543092 2740506 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.19.0
	I0821 11:03:37.566421 2740506 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0821 11:03:37.575313 2740506 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0821 11:03:37.566661 2740506 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0821 11:03:37.578503 2740506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0821 11:03:37.582507 2740506 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-664125
	I0821 11:03:37.592876 2740506 out.go:177]   - Using image docker.io/registry:2.8.1
	I0821 11:03:37.595978 2740506 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0821 11:03:37.593034 2740506 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.9
	I0821 11:03:37.593044 2740506 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0821 11:03:37.602267 2740506 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I0821 11:03:37.613293 2740506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0821 11:03:37.613360 2740506 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-664125
	I0821 11:03:37.613542 2740506 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0821 11:03:37.629667 2740506 host.go:66] Checking if "addons-664125" exists ...
	I0821 11:03:37.647550 2740506 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0821 11:03:37.646271 2740506 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I0821 11:03:37.650867 2740506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1003 bytes)
	I0821 11:03:37.650947 2740506 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-664125
	I0821 11:03:37.662701 2740506 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0821 11:03:37.671424 2740506 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0821 11:03:37.662175 2740506 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-664125" context rescaled to 1 replicas
	I0821 11:03:37.681909 2740506 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0821 11:03:37.683536 2740506 out.go:177] * Verifying Kubernetes components...
	I0821 11:03:37.682876 2740506 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0821 11:03:37.683192 2740506 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36188 SSHKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/machines/addons-664125/id_rsa Username:docker}
	I0821 11:03:37.685672 2740506 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0821 11:03:37.694113 2740506 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0821 11:03:37.701975 2740506 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0821 11:03:37.701998 2740506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0821 11:03:37.702067 2740506 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-664125
	I0821 11:03:37.730002 2740506 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.8.1
	I0821 11:03:37.736985 2740506 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0821 11:03:37.736852 2740506 addons.go:231] Setting addon default-storageclass=true in "addons-664125"
	I0821 11:03:37.743640 2740506 host.go:66] Checking if "addons-664125" exists ...
	I0821 11:03:37.744102 2740506 cli_runner.go:164] Run: docker container inspect addons-664125 --format={{.State.Status}}
	I0821 11:03:37.746488 2740506 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0821 11:03:37.743581 2740506 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0821 11:03:37.753726 2740506 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0821 11:03:37.753750 2740506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16083 bytes)
	I0821 11:03:37.753823 2740506 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-664125
	I0821 11:03:37.754003 2740506 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0821 11:03:37.754018 2740506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0821 11:03:37.754053 2740506 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-664125
	I0821 11:03:37.807694 2740506 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36188 SSHKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/machines/addons-664125/id_rsa Username:docker}
	I0821 11:03:37.808467 2740506 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36188 SSHKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/machines/addons-664125/id_rsa Username:docker}
	I0821 11:03:37.821571 2740506 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I0821 11:03:37.826042 2740506 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0821 11:03:37.826108 2740506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0821 11:03:37.826209 2740506 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-664125
	I0821 11:03:37.851582 2740506 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36188 SSHKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/machines/addons-664125/id_rsa Username:docker}
	I0821 11:03:37.865992 2740506 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36188 SSHKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/machines/addons-664125/id_rsa Username:docker}
	I0821 11:03:37.878109 2740506 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36188 SSHKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/machines/addons-664125/id_rsa Username:docker}
	I0821 11:03:37.921479 2740506 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36188 SSHKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/machines/addons-664125/id_rsa Username:docker}
	I0821 11:03:37.950016 2740506 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36188 SSHKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/machines/addons-664125/id_rsa Username:docker}
	I0821 11:03:37.950583 2740506 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0821 11:03:37.950596 2740506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0821 11:03:37.950650 2740506 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-664125
	I0821 11:03:37.959155 2740506 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36188 SSHKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/machines/addons-664125/id_rsa Username:docker}
	I0821 11:03:37.995423 2740506 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36188 SSHKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/machines/addons-664125/id_rsa Username:docker}
	I0821 11:03:38.033038 2740506 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0821 11:03:38.033059 2740506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0821 11:03:38.200751 2740506 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0821 11:03:38.200782 2740506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0821 11:03:38.210930 2740506 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0821 11:03:38.210954 2740506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0821 11:03:38.266075 2740506 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I0821 11:03:38.266107 2740506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0821 11:03:38.271366 2740506 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0821 11:03:38.298125 2740506 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0821 11:03:38.342011 2740506 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0821 11:03:38.342038 2740506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0821 11:03:38.344174 2740506 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I0821 11:03:38.344230 2740506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0821 11:03:38.356185 2740506 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0821 11:03:38.375449 2740506 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0821 11:03:38.375523 2740506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0821 11:03:38.406246 2740506 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0821 11:03:38.406403 2740506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0821 11:03:38.406381 2740506 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0821 11:03:38.411453 2740506 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0821 11:03:38.411513 2740506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0821 11:03:38.417209 2740506 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0821 11:03:38.473520 2740506 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0821 11:03:38.473592 2740506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0821 11:03:38.477020 2740506 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0821 11:03:38.477078 2740506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0821 11:03:38.516133 2740506 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0821 11:03:38.516210 2740506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0821 11:03:38.534072 2740506 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0821 11:03:38.571836 2740506 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0821 11:03:38.571894 2740506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0821 11:03:38.587453 2740506 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0821 11:03:38.587517 2740506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0821 11:03:38.611731 2740506 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0821 11:03:38.611802 2740506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0821 11:03:38.690445 2740506 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0821 11:03:38.690523 2740506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0821 11:03:38.732985 2740506 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0821 11:03:38.733056 2740506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0821 11:03:38.736578 2740506 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0821 11:03:38.736639 2740506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0821 11:03:38.752281 2740506 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0821 11:03:38.884424 2740506 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I0821 11:03:38.884491 2740506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0821 11:03:38.902765 2740506 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0821 11:03:38.907687 2740506 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0821 11:03:38.907755 2740506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0821 11:03:39.024545 2740506 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0821 11:03:39.024621 2740506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I0821 11:03:39.048061 2740506 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0821 11:03:39.048128 2740506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0821 11:03:39.126493 2740506 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0821 11:03:39.190172 2740506 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0821 11:03:39.190196 2740506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0821 11:03:39.236421 2740506 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0821 11:03:39.236491 2740506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0821 11:03:39.314677 2740506 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0821 11:03:39.314738 2740506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0821 11:03:39.393716 2740506 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0821 11:03:39.393794 2740506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0821 11:03:39.448687 2740506 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0821 11:03:39.448754 2740506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0821 11:03:39.577682 2740506 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0821 11:03:40.011859 2740506 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.326159726s)
	I0821 11:03:40.012004 2740506 start.go:901] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0821 11:03:40.011969 2740506 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.317883945s)
	I0821 11:03:40.012994 2740506 node_ready.go:35] waiting up to 6m0s for node "addons-664125" to be "Ready" ...
	I0821 11:03:41.874510 2740506 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.60310921s)
	I0821 11:03:41.948498 2740506 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.650334672s)
	I0821 11:03:42.443606 2740506 node_ready.go:58] node "addons-664125" has status "Ready":"False"
	I0821 11:03:43.143815 2740506 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.787548993s)
	I0821 11:03:43.143848 2740506 addons.go:467] Verifying addon ingress=true in "addons-664125"
	I0821 11:03:43.145849 2740506 out.go:177] * Verifying ingress addon...
	I0821 11:03:43.143995 2740506 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.737485976s)
	I0821 11:03:43.144037 2740506 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.726759848s)
	I0821 11:03:43.144064 2740506 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.609918775s)
	I0821 11:03:43.145952 2740506 addons.go:467] Verifying addon registry=true in "addons-664125"
	I0821 11:03:43.147951 2740506 out.go:177] * Verifying registry addon...
	I0821 11:03:43.144198 2740506 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.24135892s)
	I0821 11:03:43.144398 2740506 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (4.017676329s)
	I0821 11:03:43.144146 2740506 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.391787663s)
	W0821 11:03:43.150215 2740506 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0821 11:03:43.150262 2740506 retry.go:31] will retry after 167.096931ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0821 11:03:43.151044 2740506 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0821 11:03:43.151847 2740506 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0821 11:03:43.151975 2740506 addons.go:467] Verifying addon metrics-server=true in "addons-664125"
	I0821 11:03:43.164705 2740506 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0821 11:03:43.164737 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:03:43.177925 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:03:43.180240 2740506 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0821 11:03:43.180266 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:03:43.187263 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:03:43.317827 2740506 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0821 11:03:43.632933 2740506 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.055155646s)
	I0821 11:03:43.633013 2740506 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-664125"
	I0821 11:03:43.636834 2740506 out.go:177] * Verifying csi-hostpath-driver addon...
	I0821 11:03:43.640038 2740506 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0821 11:03:43.655413 2740506 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0821 11:03:43.655437 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:03:43.667762 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:03:43.684755 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:03:43.691850 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:03:44.189305 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:03:44.219332 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:03:44.220075 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:03:44.674097 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:03:44.691549 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:03:44.704121 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:03:44.758749 2740506 node_ready.go:58] node "addons-664125" has status "Ready":"False"
	I0821 11:03:45.204362 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:03:45.237679 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:03:45.268251 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:03:45.276988 2740506 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.959013721s)
	I0821 11:03:45.674479 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:03:45.683483 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:03:45.692963 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:03:45.867849 2740506 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0821 11:03:45.867977 2740506 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-664125
	I0821 11:03:45.917400 2740506 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36188 SSHKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/machines/addons-664125/id_rsa Username:docker}
	I0821 11:03:46.120025 2740506 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0821 11:03:46.172809 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:03:46.183164 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:03:46.189849 2740506 addons.go:231] Setting addon gcp-auth=true in "addons-664125"
	I0821 11:03:46.189950 2740506 host.go:66] Checking if "addons-664125" exists ...
	I0821 11:03:46.190490 2740506 cli_runner.go:164] Run: docker container inspect addons-664125 --format={{.State.Status}}
	I0821 11:03:46.194378 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:03:46.242055 2740506 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0821 11:03:46.242112 2740506 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-664125
	I0821 11:03:46.276288 2740506 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36188 SSHKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/machines/addons-664125/id_rsa Username:docker}
	I0821 11:03:46.405013 2740506 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0821 11:03:46.407352 2740506 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0821 11:03:46.409676 2740506 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0821 11:03:46.409696 2740506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0821 11:03:46.481764 2740506 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0821 11:03:46.481783 2740506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0821 11:03:46.539415 2740506 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0821 11:03:46.539434 2740506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0821 11:03:46.604092 2740506 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0821 11:03:46.672552 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:03:46.682980 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:03:46.692154 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:03:47.173206 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:03:47.187632 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:03:47.202716 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:03:47.255632 2740506 node_ready.go:58] node "addons-664125" has status "Ready":"False"
	I0821 11:03:47.752506 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:03:47.753504 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:03:47.779216 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:03:47.873492 2740506 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.269325745s)
	I0821 11:03:47.875472 2740506 addons.go:467] Verifying addon gcp-auth=true in "addons-664125"
	I0821 11:03:47.877769 2740506 out.go:177] * Verifying gcp-auth addon...
	I0821 11:03:47.880774 2740506 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0821 11:03:47.904101 2740506 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0821 11:03:47.904129 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:03:47.909327 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:03:48.172622 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:03:48.182194 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:03:48.191647 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:03:48.413428 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:03:48.673546 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:03:48.682869 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:03:48.696361 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:03:48.913981 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:03:49.173216 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:03:49.183519 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:03:49.192291 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:03:49.258184 2740506 node_ready.go:58] node "addons-664125" has status "Ready":"False"
	I0821 11:03:49.413979 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:03:49.672888 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:03:49.683420 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:03:49.691846 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:03:49.913168 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:03:50.173528 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:03:50.183051 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:03:50.195508 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:03:50.414000 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:03:50.672963 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:03:50.684031 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:03:50.692365 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:03:50.913130 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:03:51.172989 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:03:51.183336 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:03:51.192223 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:03:51.413967 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:03:51.673005 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:03:51.682749 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:03:51.691991 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:03:51.755572 2740506 node_ready.go:58] node "addons-664125" has status "Ready":"False"
	I0821 11:03:51.914348 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:03:52.172930 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:03:52.182686 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:03:52.192262 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:03:52.413569 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:03:52.672785 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:03:52.683910 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:03:52.692755 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:03:52.915086 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:03:53.172706 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:03:53.182339 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:03:53.191661 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:03:53.413410 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:03:53.673366 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:03:53.683401 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:03:53.691829 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:03:53.756280 2740506 node_ready.go:58] node "addons-664125" has status "Ready":"False"
	I0821 11:03:53.913472 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:03:54.181468 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:03:54.187129 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:03:54.196191 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:03:54.417295 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:03:54.685755 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:03:54.689081 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:03:54.694973 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:03:54.913711 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:03:55.172925 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:03:55.184740 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:03:55.196812 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:03:55.413864 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:03:55.673182 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:03:55.682171 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:03:55.692761 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:03:55.757827 2740506 node_ready.go:58] node "addons-664125" has status "Ready":"False"
	I0821 11:03:55.913571 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:03:56.172376 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:03:56.182480 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:03:56.191684 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:03:56.413582 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:03:56.672887 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:03:56.682010 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:03:56.692102 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:03:56.913486 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:03:57.172799 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:03:57.183274 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:03:57.191259 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:03:57.413409 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:03:57.672840 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:03:57.682052 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:03:57.692204 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:03:57.913957 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:03:58.173625 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:03:58.183230 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:03:58.193047 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:03:58.254576 2740506 node_ready.go:58] node "addons-664125" has status "Ready":"False"
	I0821 11:03:58.413154 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:03:58.672892 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:03:58.681928 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:03:58.692126 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:03:58.914974 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:03:59.172553 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:03:59.182768 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:03:59.191826 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:03:59.412914 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:03:59.672543 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:03:59.682147 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:03:59.692337 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:03:59.912831 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:00.174374 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:00.199449 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:00.203890 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:00.255052 2740506 node_ready.go:58] node "addons-664125" has status "Ready":"False"
	I0821 11:04:00.413195 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:00.672094 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:00.683044 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:00.692392 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:00.913002 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:01.172840 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:01.182944 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:01.192328 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:01.412894 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:01.673133 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:01.682526 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:01.691639 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:01.914235 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:02.173152 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:02.182624 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:02.192890 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:02.255125 2740506 node_ready.go:58] node "addons-664125" has status "Ready":"False"
	I0821 11:04:02.413307 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:02.673070 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:02.682108 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:02.691352 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:02.913848 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:03.175394 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:03.183097 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:03.202176 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:03.413187 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:03.672969 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:03.683246 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:03.692618 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:03.913354 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:04.172572 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:04.182320 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:04.191371 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:04.413595 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:04.672796 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:04.682341 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:04.691725 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:04.754732 2740506 node_ready.go:58] node "addons-664125" has status "Ready":"False"
	I0821 11:04:04.912740 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:05.172255 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:05.182943 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:05.192228 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:05.413282 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:05.672248 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:05.682222 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:05.691165 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:05.913227 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:06.172699 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:06.183077 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:06.191418 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:06.413479 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:06.672885 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:06.682246 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:06.691390 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:06.913042 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:07.172822 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:07.182700 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:07.191691 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:07.254735 2740506 node_ready.go:58] node "addons-664125" has status "Ready":"False"
	I0821 11:04:07.413297 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:07.672790 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:07.682420 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:07.691144 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:07.913204 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:08.173604 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:08.183287 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:08.191418 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:08.413623 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:08.672809 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:08.683135 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:08.691867 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:08.913646 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:09.173427 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:09.182895 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:09.191667 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:09.412840 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:09.672496 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:09.682045 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:09.692683 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:09.754128 2740506 node_ready.go:58] node "addons-664125" has status "Ready":"False"
	I0821 11:04:09.913628 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:10.174183 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:10.182483 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:10.191722 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:10.412861 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:10.672623 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:10.683816 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:10.691799 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:10.912757 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:11.172213 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:11.182798 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:11.192157 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:11.413229 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:11.672547 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:11.683528 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:11.691732 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:11.754472 2740506 node_ready.go:58] node "addons-664125" has status "Ready":"False"
	I0821 11:04:11.912998 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:12.174434 2740506 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0821 11:04:12.174460 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:12.187890 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:12.194599 2740506 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0821 11:04:12.194662 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:12.262839 2740506 node_ready.go:49] node "addons-664125" has status "Ready":"True"
	I0821 11:04:12.262861 2740506 node_ready.go:38] duration metric: took 32.24979801s waiting for node "addons-664125" to be "Ready" ...
	I0821 11:04:12.262870 2740506 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0821 11:04:12.294064 2740506 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-qkb55" in "kube-system" namespace to be "Ready" ...
	I0821 11:04:12.420002 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:12.720422 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:12.744242 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:12.745121 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:12.917257 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:13.175133 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:13.183666 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:13.193738 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:13.413188 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:13.673020 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:13.682838 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:13.692048 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:13.907460 2740506 pod_ready.go:92] pod "coredns-5d78c9869d-qkb55" in "kube-system" namespace has status "Ready":"True"
	I0821 11:04:13.907482 2740506 pod_ready.go:81] duration metric: took 1.613383993s waiting for pod "coredns-5d78c9869d-qkb55" in "kube-system" namespace to be "Ready" ...
	I0821 11:04:13.907503 2740506 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-664125" in "kube-system" namespace to be "Ready" ...
	I0821 11:04:13.916101 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:13.916388 2740506 pod_ready.go:92] pod "etcd-addons-664125" in "kube-system" namespace has status "Ready":"True"
	I0821 11:04:13.916403 2740506 pod_ready.go:81] duration metric: took 8.867614ms waiting for pod "etcd-addons-664125" in "kube-system" namespace to be "Ready" ...
	I0821 11:04:13.916420 2740506 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-664125" in "kube-system" namespace to be "Ready" ...
	I0821 11:04:13.923292 2740506 pod_ready.go:92] pod "kube-apiserver-addons-664125" in "kube-system" namespace has status "Ready":"True"
	I0821 11:04:13.923317 2740506 pod_ready.go:81] duration metric: took 6.881734ms waiting for pod "kube-apiserver-addons-664125" in "kube-system" namespace to be "Ready" ...
	I0821 11:04:13.923329 2740506 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-664125" in "kube-system" namespace to be "Ready" ...
	I0821 11:04:13.937576 2740506 pod_ready.go:92] pod "kube-controller-manager-addons-664125" in "kube-system" namespace has status "Ready":"True"
	I0821 11:04:13.937600 2740506 pod_ready.go:81] duration metric: took 14.263584ms waiting for pod "kube-controller-manager-addons-664125" in "kube-system" namespace to be "Ready" ...
	I0821 11:04:13.937616 2740506 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-l8g45" in "kube-system" namespace to be "Ready" ...
	I0821 11:04:14.175063 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:14.183247 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:14.192910 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:14.255848 2740506 pod_ready.go:92] pod "kube-proxy-l8g45" in "kube-system" namespace has status "Ready":"True"
	I0821 11:04:14.255922 2740506 pod_ready.go:81] duration metric: took 318.298633ms waiting for pod "kube-proxy-l8g45" in "kube-system" namespace to be "Ready" ...
	I0821 11:04:14.255949 2740506 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-664125" in "kube-system" namespace to be "Ready" ...
	I0821 11:04:14.412890 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:14.656669 2740506 pod_ready.go:92] pod "kube-scheduler-addons-664125" in "kube-system" namespace has status "Ready":"True"
	I0821 11:04:14.656737 2740506 pod_ready.go:81] duration metric: took 400.767585ms waiting for pod "kube-scheduler-addons-664125" in "kube-system" namespace to be "Ready" ...
	I0821 11:04:14.656779 2740506 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-7746886d4f-prk24" in "kube-system" namespace to be "Ready" ...
	I0821 11:04:14.676519 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:14.684521 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:14.698758 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:14.914405 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:15.174978 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:15.184113 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:15.194332 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:15.413685 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:15.676172 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:15.685289 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:15.693717 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:15.913359 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:16.175319 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:16.212296 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:16.216482 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:16.413855 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:16.674066 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:16.683361 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:16.693691 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:16.914007 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:16.968852 2740506 pod_ready.go:102] pod "metrics-server-7746886d4f-prk24" in "kube-system" namespace has status "Ready":"False"
	I0821 11:04:17.174918 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:17.183971 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:17.192703 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:17.415318 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:17.675218 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:17.684238 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:17.705049 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:17.915065 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:18.177112 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:18.197638 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:18.200221 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:18.414585 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:18.675155 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:18.686661 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:18.694002 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:18.913263 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:18.969555 2740506 pod_ready.go:102] pod "metrics-server-7746886d4f-prk24" in "kube-system" namespace has status "Ready":"False"
	I0821 11:04:19.174314 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:19.182270 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:19.192849 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:19.412981 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:19.673654 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:19.682705 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:19.692090 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:19.915761 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:20.174428 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:20.182707 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:20.192354 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:20.414043 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:20.676546 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:20.683544 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:20.694098 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:20.913828 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:20.979043 2740506 pod_ready.go:102] pod "metrics-server-7746886d4f-prk24" in "kube-system" namespace has status "Ready":"False"
	I0821 11:04:21.178060 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:21.184187 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:21.195199 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:21.417987 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:21.675488 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:21.684840 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:21.693334 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:21.913728 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:22.174473 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:22.183340 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:22.192609 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:22.415581 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:22.673705 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:22.682529 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:22.692184 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:22.929634 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:23.176364 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:23.185292 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:23.193995 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:23.415476 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:23.464948 2740506 pod_ready.go:102] pod "metrics-server-7746886d4f-prk24" in "kube-system" namespace has status "Ready":"False"
	I0821 11:04:23.673970 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:23.683098 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:23.693071 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:23.913282 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:24.174481 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:24.183377 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:24.191984 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:24.413074 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:24.682151 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:24.688135 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:24.701355 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:24.913589 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:25.175224 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:25.184444 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:25.192843 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:25.415532 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:25.479505 2740506 pod_ready.go:102] pod "metrics-server-7746886d4f-prk24" in "kube-system" namespace has status "Ready":"False"
	I0821 11:04:25.676260 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:25.685089 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:25.694937 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:25.922483 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:26.174900 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:26.188127 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:26.195968 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:26.413479 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:26.674319 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:26.683303 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:26.693526 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:26.915876 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:27.173518 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:27.182858 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:27.192403 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:27.413723 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:27.673517 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:27.684460 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:27.692385 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:27.912963 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:27.971068 2740506 pod_ready.go:102] pod "metrics-server-7746886d4f-prk24" in "kube-system" namespace has status "Ready":"False"
	I0821 11:04:28.195447 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:28.198081 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:28.201931 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:28.415584 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:28.673793 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:28.682914 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:28.692368 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:28.917394 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:29.174173 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:29.185366 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:29.195274 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:29.413269 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:29.674756 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:29.683784 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:29.693205 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:29.913352 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:30.175596 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:30.184147 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:30.193922 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:30.413978 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:30.467510 2740506 pod_ready.go:102] pod "metrics-server-7746886d4f-prk24" in "kube-system" namespace has status "Ready":"False"
	I0821 11:04:30.675107 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:30.682896 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:30.693148 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:30.915153 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:31.177911 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:31.184820 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:31.202302 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:31.415687 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:31.677942 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:31.686106 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:31.699247 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:31.916326 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:32.174979 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:32.184273 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:32.193083 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:32.416368 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:32.674913 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:32.683639 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:32.692345 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:32.914331 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:32.966167 2740506 pod_ready.go:102] pod "metrics-server-7746886d4f-prk24" in "kube-system" namespace has status "Ready":"False"
	I0821 11:04:33.199576 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:33.199807 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:33.200470 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:33.417203 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:33.675584 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:33.683680 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:33.693802 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:33.914739 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:34.174356 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:34.183765 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:34.194684 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:34.413563 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:34.674748 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:34.682953 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:34.696092 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:34.912973 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:35.173408 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:35.182683 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:35.192298 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:35.412938 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:35.464953 2740506 pod_ready.go:102] pod "metrics-server-7746886d4f-prk24" in "kube-system" namespace has status "Ready":"False"
	I0821 11:04:35.674262 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:35.684334 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:35.692009 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:35.916447 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:36.176959 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:36.183204 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:36.192774 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:36.414256 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:36.678704 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:36.700102 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:36.702337 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:36.913234 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:37.173577 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:37.183069 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:37.192379 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:37.414899 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:37.465560 2740506 pod_ready.go:102] pod "metrics-server-7746886d4f-prk24" in "kube-system" namespace has status "Ready":"False"
	I0821 11:04:37.673768 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:37.683004 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:37.692491 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:37.913151 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:38.176281 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:38.184086 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:38.192930 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:38.413710 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:38.674301 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:38.682619 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:38.692236 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:38.913537 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:39.181048 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:39.194126 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:39.194996 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:39.414314 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:39.674836 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:39.682337 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:39.692471 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:39.927829 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:39.966297 2740506 pod_ready.go:102] pod "metrics-server-7746886d4f-prk24" in "kube-system" namespace has status "Ready":"False"
	I0821 11:04:40.176150 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:40.186031 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:40.203355 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:40.414933 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:40.674630 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:40.684074 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:40.693325 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:40.914004 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:41.176421 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:41.184691 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:41.193525 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:41.414379 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:41.675136 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:41.684462 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:41.692892 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:41.914529 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:41.973516 2740506 pod_ready.go:102] pod "metrics-server-7746886d4f-prk24" in "kube-system" namespace has status "Ready":"False"
	I0821 11:04:42.177724 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:42.183005 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:42.193409 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:42.426241 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:42.467057 2740506 pod_ready.go:92] pod "metrics-server-7746886d4f-prk24" in "kube-system" namespace has status "Ready":"True"
	I0821 11:04:42.467083 2740506 pod_ready.go:81] duration metric: took 27.810279193s waiting for pod "metrics-server-7746886d4f-prk24" in "kube-system" namespace to be "Ready" ...
	I0821 11:04:42.467107 2740506 pod_ready.go:38] duration metric: took 30.204214651s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0821 11:04:42.467128 2740506 api_server.go:52] waiting for apiserver process to appear ...
	I0821 11:04:42.467152 2740506 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0821 11:04:42.467212 2740506 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0821 11:04:42.516310 2740506 cri.go:89] found id: "6cbd4e54615cf75c28330ae999f7fca6dc25bac4cba64478a4f26b16ee3870ab"
	I0821 11:04:42.516331 2740506 cri.go:89] found id: ""
	I0821 11:04:42.516338 2740506 logs.go:284] 1 containers: [6cbd4e54615cf75c28330ae999f7fca6dc25bac4cba64478a4f26b16ee3870ab]
	I0821 11:04:42.516406 2740506 ssh_runner.go:195] Run: which crictl
	I0821 11:04:42.520695 2740506 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0821 11:04:42.522011 2740506 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0821 11:04:42.571575 2740506 cri.go:89] found id: "1875b24e0acf054bd167979cd03c603ca7e36ab44d8f67e5d1c361bb8fa371b8"
	I0821 11:04:42.571600 2740506 cri.go:89] found id: ""
	I0821 11:04:42.571607 2740506 logs.go:284] 1 containers: [1875b24e0acf054bd167979cd03c603ca7e36ab44d8f67e5d1c361bb8fa371b8]
	I0821 11:04:42.571689 2740506 ssh_runner.go:195] Run: which crictl
	I0821 11:04:42.576513 2740506 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0821 11:04:42.576590 2740506 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0821 11:04:42.621735 2740506 cri.go:89] found id: "f05a8dcc4ed32441d3886e012ec4b57ea5149ad9279182a34c8788d9edecaa80"
	I0821 11:04:42.621782 2740506 cri.go:89] found id: ""
	I0821 11:04:42.621790 2740506 logs.go:284] 1 containers: [f05a8dcc4ed32441d3886e012ec4b57ea5149ad9279182a34c8788d9edecaa80]
	I0821 11:04:42.621849 2740506 ssh_runner.go:195] Run: which crictl
	I0821 11:04:42.626557 2740506 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0821 11:04:42.626627 2740506 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0821 11:04:42.667870 2740506 cri.go:89] found id: "da1aa61726154ba00afa1906d84454fa99d1291d023aee6fb2cb4eb5fe156ff4"
	I0821 11:04:42.667889 2740506 cri.go:89] found id: ""
	I0821 11:04:42.667897 2740506 logs.go:284] 1 containers: [da1aa61726154ba00afa1906d84454fa99d1291d023aee6fb2cb4eb5fe156ff4]
	I0821 11:04:42.667950 2740506 ssh_runner.go:195] Run: which crictl
	I0821 11:04:42.674134 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:42.674400 2740506 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0821 11:04:42.674481 2740506 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0821 11:04:42.685094 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:42.693491 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0821 11:04:42.721383 2740506 cri.go:89] found id: "2f45118b518de48285b26e81d211404293caedb1b87911046f526f82b6cf40bc"
	I0821 11:04:42.721406 2740506 cri.go:89] found id: ""
	I0821 11:04:42.721413 2740506 logs.go:284] 1 containers: [2f45118b518de48285b26e81d211404293caedb1b87911046f526f82b6cf40bc]
	I0821 11:04:42.721469 2740506 ssh_runner.go:195] Run: which crictl
	I0821 11:04:42.725870 2740506 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0821 11:04:42.725978 2740506 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0821 11:04:42.784568 2740506 cri.go:89] found id: "fd62ade93bd47e62c2992f2879b7f9a3174f9aff70f88ee58f6b6fe2604f54d4"
	I0821 11:04:42.784591 2740506 cri.go:89] found id: ""
	I0821 11:04:42.784598 2740506 logs.go:284] 1 containers: [fd62ade93bd47e62c2992f2879b7f9a3174f9aff70f88ee58f6b6fe2604f54d4]
	I0821 11:04:42.784679 2740506 ssh_runner.go:195] Run: which crictl
	I0821 11:04:42.790042 2740506 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0821 11:04:42.790111 2740506 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0821 11:04:42.836127 2740506 cri.go:89] found id: "135c173990081138d57a5c8c0d7e2b8c5c0b52a9a84d5cb1d9de43de7729cc01"
	I0821 11:04:42.836148 2740506 cri.go:89] found id: ""
	I0821 11:04:42.836155 2740506 logs.go:284] 1 containers: [135c173990081138d57a5c8c0d7e2b8c5c0b52a9a84d5cb1d9de43de7729cc01]
	I0821 11:04:42.836212 2740506 ssh_runner.go:195] Run: which crictl
	I0821 11:04:42.840747 2740506 logs.go:123] Gathering logs for dmesg ...
	I0821 11:04:42.840775 2740506 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0821 11:04:42.862869 2740506 logs.go:123] Gathering logs for describe nodes ...
	I0821 11:04:42.862895 2740506 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0821 11:04:42.914038 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:43.082816 2740506 logs.go:123] Gathering logs for coredns [f05a8dcc4ed32441d3886e012ec4b57ea5149ad9279182a34c8788d9edecaa80] ...
	I0821 11:04:43.082884 2740506 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f05a8dcc4ed32441d3886e012ec4b57ea5149ad9279182a34c8788d9edecaa80"
	I0821 11:04:43.167673 2740506 logs.go:123] Gathering logs for kube-scheduler [da1aa61726154ba00afa1906d84454fa99d1291d023aee6fb2cb4eb5fe156ff4] ...
	I0821 11:04:43.167815 2740506 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da1aa61726154ba00afa1906d84454fa99d1291d023aee6fb2cb4eb5fe156ff4"
	I0821 11:04:43.174672 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:43.186666 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:43.193512 2740506 kapi.go:107] duration metric: took 1m0.042458379s to wait for kubernetes.io/minikube-addons=registry ...
	I0821 11:04:43.315766 2740506 logs.go:123] Gathering logs for kube-proxy [2f45118b518de48285b26e81d211404293caedb1b87911046f526f82b6cf40bc] ...
	I0821 11:04:43.315849 2740506 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f45118b518de48285b26e81d211404293caedb1b87911046f526f82b6cf40bc"
	I0821 11:04:43.400088 2740506 logs.go:123] Gathering logs for CRI-O ...
	I0821 11:04:43.400118 2740506 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0821 11:04:43.424951 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:43.568373 2740506 logs.go:123] Gathering logs for container status ...
	I0821 11:04:43.568408 2740506 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0821 11:04:43.696738 2740506 logs.go:123] Gathering logs for kubelet ...
	I0821 11:04:43.696766 2740506 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0821 11:04:43.702865 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:43.713139 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0821 11:04:43.776017 2740506 logs.go:138] Found kubelet problem: Aug 21 11:04:12 addons-664125 kubelet[1365]: W0821 11:04:12.094169    1365 reflector.go:533] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-664125" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-664125' and this object
	W0821 11:04:43.776247 2740506 logs.go:138] Found kubelet problem: Aug 21 11:04:12 addons-664125 kubelet[1365]: E0821 11:04:12.094211    1365 reflector.go:148] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-664125" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-664125' and this object
	W0821 11:04:43.777115 2740506 logs.go:138] Found kubelet problem: Aug 21 11:04:12 addons-664125 kubelet[1365]: W0821 11:04:12.123082    1365 reflector.go:533] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-664125" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-664125' and this object
	W0821 11:04:43.777319 2740506 logs.go:138] Found kubelet problem: Aug 21 11:04:12 addons-664125 kubelet[1365]: E0821 11:04:12.123132    1365 reflector.go:148] object-"gcp-auth"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-664125" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-664125' and this object
	W0821 11:04:43.777494 2740506 logs.go:138] Found kubelet problem: Aug 21 11:04:12 addons-664125 kubelet[1365]: W0821 11:04:12.123312    1365 reflector.go:533] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-664125" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-664125' and this object
	W0821 11:04:43.777688 2740506 logs.go:138] Found kubelet problem: Aug 21 11:04:12 addons-664125 kubelet[1365]: E0821 11:04:12.123339    1365 reflector.go:148] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-664125" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-664125' and this object
	W0821 11:04:43.777894 2740506 logs.go:138] Found kubelet problem: Aug 21 11:04:12 addons-664125 kubelet[1365]: W0821 11:04:12.123706    1365 reflector.go:533] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-664125" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-664125' and this object
	W0821 11:04:43.778103 2740506 logs.go:138] Found kubelet problem: Aug 21 11:04:12 addons-664125 kubelet[1365]: E0821 11:04:12.123747    1365 reflector.go:148] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-664125" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-664125' and this object
	I0821 11:04:43.803804 2740506 logs.go:123] Gathering logs for kube-apiserver [6cbd4e54615cf75c28330ae999f7fca6dc25bac4cba64478a4f26b16ee3870ab] ...
	I0821 11:04:43.803836 2740506 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6cbd4e54615cf75c28330ae999f7fca6dc25bac4cba64478a4f26b16ee3870ab"
	I0821 11:04:43.915541 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:43.973161 2740506 logs.go:123] Gathering logs for etcd [1875b24e0acf054bd167979cd03c603ca7e36ab44d8f67e5d1c361bb8fa371b8] ...
	I0821 11:04:43.973199 2740506 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1875b24e0acf054bd167979cd03c603ca7e36ab44d8f67e5d1c361bb8fa371b8"
	I0821 11:04:44.056707 2740506 logs.go:123] Gathering logs for kube-controller-manager [fd62ade93bd47e62c2992f2879b7f9a3174f9aff70f88ee58f6b6fe2604f54d4] ...
	I0821 11:04:44.056745 2740506 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd62ade93bd47e62c2992f2879b7f9a3174f9aff70f88ee58f6b6fe2604f54d4"
	I0821 11:04:44.152869 2740506 logs.go:123] Gathering logs for kindnet [135c173990081138d57a5c8c0d7e2b8c5c0b52a9a84d5cb1d9de43de7729cc01] ...
	I0821 11:04:44.152952 2740506 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 135c173990081138d57a5c8c0d7e2b8c5c0b52a9a84d5cb1d9de43de7729cc01"
	I0821 11:04:44.176042 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:44.184272 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:44.218970 2740506 out.go:309] Setting ErrFile to fd 2...
	I0821 11:04:44.218998 2740506 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0821 11:04:44.219049 2740506 out.go:239] X Problems detected in kubelet:
	W0821 11:04:44.219064 2740506 out.go:239]   Aug 21 11:04:12 addons-664125 kubelet[1365]: E0821 11:04:12.123132    1365 reflector.go:148] object-"gcp-auth"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-664125" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-664125' and this object
	W0821 11:04:44.219072 2740506 out.go:239]   Aug 21 11:04:12 addons-664125 kubelet[1365]: W0821 11:04:12.123312    1365 reflector.go:533] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-664125" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-664125' and this object
	W0821 11:04:44.219083 2740506 out.go:239]   Aug 21 11:04:12 addons-664125 kubelet[1365]: E0821 11:04:12.123339    1365 reflector.go:148] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-664125" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-664125' and this object
	W0821 11:04:44.219095 2740506 out.go:239]   Aug 21 11:04:12 addons-664125 kubelet[1365]: W0821 11:04:12.123706    1365 reflector.go:533] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-664125" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-664125' and this object
	W0821 11:04:44.219102 2740506 out.go:239]   Aug 21 11:04:12 addons-664125 kubelet[1365]: E0821 11:04:12.123747    1365 reflector.go:148] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-664125" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-664125' and this object
	I0821 11:04:44.219111 2740506 out.go:309] Setting ErrFile to fd 2...
	I0821 11:04:44.219117 2740506 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 11:04:44.413105 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:44.674031 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:44.683392 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:44.913859 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:45.176453 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:45.183097 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:45.414057 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:45.674012 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:45.682852 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:45.913790 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:46.174051 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:46.182601 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:46.413249 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:46.673309 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:46.682376 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:46.913291 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:47.174166 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:47.182621 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:47.413577 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:47.674084 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:47.683143 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:47.914367 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:48.235654 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:48.237236 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:48.417080 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:48.673533 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:48.682714 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:48.913542 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:49.173351 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:49.182899 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:49.413762 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:49.674106 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:49.683078 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:49.913307 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:50.174824 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:50.183308 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:50.426652 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:50.674082 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:50.686596 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:50.913300 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:51.173568 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:51.182944 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:51.420198 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:51.674244 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:51.683162 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:51.913883 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:52.174494 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:52.183361 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:52.415880 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:52.680769 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:52.691472 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:52.913955 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:53.175582 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:53.184413 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:53.413244 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:53.673938 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:53.682229 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:53.913235 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:54.183452 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:54.186960 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:54.220159 2740506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0821 11:04:54.251237 2740506 api_server.go:72] duration metric: took 1m16.56928964s to wait for apiserver process to appear ...
	I0821 11:04:54.251306 2740506 api_server.go:88] waiting for apiserver healthz status ...
	I0821 11:04:54.251338 2740506 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0821 11:04:54.251431 2740506 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0821 11:04:54.348271 2740506 cri.go:89] found id: "6cbd4e54615cf75c28330ae999f7fca6dc25bac4cba64478a4f26b16ee3870ab"
	I0821 11:04:54.348340 2740506 cri.go:89] found id: ""
	I0821 11:04:54.348360 2740506 logs.go:284] 1 containers: [6cbd4e54615cf75c28330ae999f7fca6dc25bac4cba64478a4f26b16ee3870ab]
	I0821 11:04:54.348447 2740506 ssh_runner.go:195] Run: which crictl
	I0821 11:04:54.362625 2740506 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0821 11:04:54.362695 2740506 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0821 11:04:54.414269 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:54.436886 2740506 cri.go:89] found id: "1875b24e0acf054bd167979cd03c603ca7e36ab44d8f67e5d1c361bb8fa371b8"
	I0821 11:04:54.436955 2740506 cri.go:89] found id: ""
	I0821 11:04:54.436989 2740506 logs.go:284] 1 containers: [1875b24e0acf054bd167979cd03c603ca7e36ab44d8f67e5d1c361bb8fa371b8]
	I0821 11:04:54.437082 2740506 ssh_runner.go:195] Run: which crictl
	I0821 11:04:54.442462 2740506 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0821 11:04:54.442576 2740506 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0821 11:04:54.490871 2740506 cri.go:89] found id: "f05a8dcc4ed32441d3886e012ec4b57ea5149ad9279182a34c8788d9edecaa80"
	I0821 11:04:54.490939 2740506 cri.go:89] found id: ""
	I0821 11:04:54.490965 2740506 logs.go:284] 1 containers: [f05a8dcc4ed32441d3886e012ec4b57ea5149ad9279182a34c8788d9edecaa80]
	I0821 11:04:54.491046 2740506 ssh_runner.go:195] Run: which crictl
	I0821 11:04:54.496349 2740506 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0821 11:04:54.496469 2740506 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0821 11:04:54.554990 2740506 cri.go:89] found id: "da1aa61726154ba00afa1906d84454fa99d1291d023aee6fb2cb4eb5fe156ff4"
	I0821 11:04:54.555062 2740506 cri.go:89] found id: ""
	I0821 11:04:54.555083 2740506 logs.go:284] 1 containers: [da1aa61726154ba00afa1906d84454fa99d1291d023aee6fb2cb4eb5fe156ff4]
	I0821 11:04:54.555176 2740506 ssh_runner.go:195] Run: which crictl
	I0821 11:04:54.560556 2740506 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0821 11:04:54.560693 2740506 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0821 11:04:54.611962 2740506 cri.go:89] found id: "2f45118b518de48285b26e81d211404293caedb1b87911046f526f82b6cf40bc"
	I0821 11:04:54.612031 2740506 cri.go:89] found id: ""
	I0821 11:04:54.612052 2740506 logs.go:284] 1 containers: [2f45118b518de48285b26e81d211404293caedb1b87911046f526f82b6cf40bc]
	I0821 11:04:54.612138 2740506 ssh_runner.go:195] Run: which crictl
	I0821 11:04:54.617393 2740506 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0821 11:04:54.617459 2740506 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0821 11:04:54.671150 2740506 cri.go:89] found id: "fd62ade93bd47e62c2992f2879b7f9a3174f9aff70f88ee58f6b6fe2604f54d4"
	I0821 11:04:54.671169 2740506 cri.go:89] found id: ""
	I0821 11:04:54.671177 2740506 logs.go:284] 1 containers: [fd62ade93bd47e62c2992f2879b7f9a3174f9aff70f88ee58f6b6fe2604f54d4]
	I0821 11:04:54.671230 2740506 ssh_runner.go:195] Run: which crictl
	I0821 11:04:54.675831 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:54.682708 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:54.685910 2740506 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0821 11:04:54.686020 2740506 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0821 11:04:54.749308 2740506 cri.go:89] found id: "135c173990081138d57a5c8c0d7e2b8c5c0b52a9a84d5cb1d9de43de7729cc01"
	I0821 11:04:54.749374 2740506 cri.go:89] found id: ""
	I0821 11:04:54.749395 2740506 logs.go:284] 1 containers: [135c173990081138d57a5c8c0d7e2b8c5c0b52a9a84d5cb1d9de43de7729cc01]
	I0821 11:04:54.749484 2740506 ssh_runner.go:195] Run: which crictl
	I0821 11:04:54.754713 2740506 logs.go:123] Gathering logs for kube-controller-manager [fd62ade93bd47e62c2992f2879b7f9a3174f9aff70f88ee58f6b6fe2604f54d4] ...
	I0821 11:04:54.754784 2740506 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd62ade93bd47e62c2992f2879b7f9a3174f9aff70f88ee58f6b6fe2604f54d4"
	I0821 11:04:54.840865 2740506 logs.go:123] Gathering logs for kubelet ...
	I0821 11:04:54.840939 2740506 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0821 11:04:54.913779 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0821 11:04:54.937417 2740506 logs.go:138] Found kubelet problem: Aug 21 11:04:12 addons-664125 kubelet[1365]: W0821 11:04:12.094169    1365 reflector.go:533] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-664125" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-664125' and this object
	W0821 11:04:54.937741 2740506 logs.go:138] Found kubelet problem: Aug 21 11:04:12 addons-664125 kubelet[1365]: E0821 11:04:12.094211    1365 reflector.go:148] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-664125" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-664125' and this object
	W0821 11:04:54.939244 2740506 logs.go:138] Found kubelet problem: Aug 21 11:04:12 addons-664125 kubelet[1365]: W0821 11:04:12.123082    1365 reflector.go:533] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-664125" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-664125' and this object
	W0821 11:04:54.939488 2740506 logs.go:138] Found kubelet problem: Aug 21 11:04:12 addons-664125 kubelet[1365]: E0821 11:04:12.123132    1365 reflector.go:148] object-"gcp-auth"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-664125" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-664125' and this object
	W0821 11:04:54.939723 2740506 logs.go:138] Found kubelet problem: Aug 21 11:04:12 addons-664125 kubelet[1365]: W0821 11:04:12.123312    1365 reflector.go:533] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-664125" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-664125' and this object
	W0821 11:04:54.939985 2740506 logs.go:138] Found kubelet problem: Aug 21 11:04:12 addons-664125 kubelet[1365]: E0821 11:04:12.123339    1365 reflector.go:148] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-664125" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-664125' and this object
	W0821 11:04:54.940206 2740506 logs.go:138] Found kubelet problem: Aug 21 11:04:12 addons-664125 kubelet[1365]: W0821 11:04:12.123706    1365 reflector.go:533] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-664125" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-664125' and this object
	W0821 11:04:54.941161 2740506 logs.go:138] Found kubelet problem: Aug 21 11:04:12 addons-664125 kubelet[1365]: E0821 11:04:12.123747    1365 reflector.go:148] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-664125" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-664125' and this object
	I0821 11:04:54.992700 2740506 logs.go:123] Gathering logs for dmesg ...
	I0821 11:04:54.992740 2740506 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0821 11:04:55.020391 2740506 logs.go:123] Gathering logs for kube-apiserver [6cbd4e54615cf75c28330ae999f7fca6dc25bac4cba64478a4f26b16ee3870ab] ...
	I0821 11:04:55.020437 2740506 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6cbd4e54615cf75c28330ae999f7fca6dc25bac4cba64478a4f26b16ee3870ab"
	I0821 11:04:55.105905 2740506 logs.go:123] Gathering logs for etcd [1875b24e0acf054bd167979cd03c603ca7e36ab44d8f67e5d1c361bb8fa371b8] ...
	I0821 11:04:55.105948 2740506 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1875b24e0acf054bd167979cd03c603ca7e36ab44d8f67e5d1c361bb8fa371b8"
	I0821 11:04:55.202434 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:55.206918 2740506 logs.go:123] Gathering logs for coredns [f05a8dcc4ed32441d3886e012ec4b57ea5149ad9279182a34c8788d9edecaa80] ...
	I0821 11:04:55.206967 2740506 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f05a8dcc4ed32441d3886e012ec4b57ea5149ad9279182a34c8788d9edecaa80"
	I0821 11:04:55.209690 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:55.259176 2740506 logs.go:123] Gathering logs for kube-scheduler [da1aa61726154ba00afa1906d84454fa99d1291d023aee6fb2cb4eb5fe156ff4] ...
	I0821 11:04:55.259252 2740506 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da1aa61726154ba00afa1906d84454fa99d1291d023aee6fb2cb4eb5fe156ff4"
	I0821 11:04:55.320284 2740506 logs.go:123] Gathering logs for describe nodes ...
	I0821 11:04:55.320365 2740506 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0821 11:04:55.412976 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:55.463996 2740506 logs.go:123] Gathering logs for kube-proxy [2f45118b518de48285b26e81d211404293caedb1b87911046f526f82b6cf40bc] ...
	I0821 11:04:55.464026 2740506 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f45118b518de48285b26e81d211404293caedb1b87911046f526f82b6cf40bc"
	I0821 11:04:55.510876 2740506 logs.go:123] Gathering logs for kindnet [135c173990081138d57a5c8c0d7e2b8c5c0b52a9a84d5cb1d9de43de7729cc01] ...
	I0821 11:04:55.510905 2740506 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 135c173990081138d57a5c8c0d7e2b8c5c0b52a9a84d5cb1d9de43de7729cc01"
	I0821 11:04:55.554431 2740506 logs.go:123] Gathering logs for CRI-O ...
	I0821 11:04:55.554460 2740506 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0821 11:04:55.643318 2740506 logs.go:123] Gathering logs for container status ...
	I0821 11:04:55.643355 2740506 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0821 11:04:55.675102 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:55.684715 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:55.720021 2740506 out.go:309] Setting ErrFile to fd 2...
	I0821 11:04:55.720631 2740506 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0821 11:04:55.720711 2740506 out.go:239] X Problems detected in kubelet:
	W0821 11:04:55.720753 2740506 out.go:239]   Aug 21 11:04:12 addons-664125 kubelet[1365]: E0821 11:04:12.123132    1365 reflector.go:148] object-"gcp-auth"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-664125" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-664125' and this object
	W0821 11:04:55.720908 2740506 out.go:239]   Aug 21 11:04:12 addons-664125 kubelet[1365]: W0821 11:04:12.123312    1365 reflector.go:533] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-664125" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-664125' and this object
	W0821 11:04:55.720943 2740506 out.go:239]   Aug 21 11:04:12 addons-664125 kubelet[1365]: E0821 11:04:12.123339    1365 reflector.go:148] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-664125" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-664125' and this object
	W0821 11:04:55.720975 2740506 out.go:239]   Aug 21 11:04:12 addons-664125 kubelet[1365]: W0821 11:04:12.123706    1365 reflector.go:533] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-664125" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-664125' and this object
	W0821 11:04:55.721017 2740506 out.go:239]   Aug 21 11:04:12 addons-664125 kubelet[1365]: E0821 11:04:12.123747    1365 reflector.go:148] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-664125" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-664125' and this object
	I0821 11:04:55.721046 2740506 out.go:309] Setting ErrFile to fd 2...
	I0821 11:04:55.721066 2740506 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 11:04:55.913567 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:56.174347 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:56.182635 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:56.412995 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:56.675406 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:56.683845 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:56.915718 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:57.174157 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:57.183492 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:57.413990 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:57.673323 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0821 11:04:57.682806 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:57.913465 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:58.174323 2740506 kapi.go:107] duration metric: took 1m14.534283323s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0821 11:04:58.183206 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:58.413808 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:58.683019 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:58.914116 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:59.183006 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:59.413687 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:04:59.683240 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:04:59.912689 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:05:00.183921 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:05:00.413793 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:05:00.683240 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:05:00.912825 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:05:01.183361 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:05:01.412851 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:05:01.682960 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:05:01.913593 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:05:02.183477 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:05:02.413573 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:05:02.682351 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:05:02.913697 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:05:03.184736 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:05:03.413642 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:05:03.682556 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:05:03.913179 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:05:04.183325 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:05:04.413700 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:05:04.682242 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:05:04.913041 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:05:05.184880 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:05:05.413854 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:05:05.682451 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:05:05.722703 2740506 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0821 11:05:05.731719 2740506 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0821 11:05:05.732936 2740506 api_server.go:141] control plane version: v1.27.4
	I0821 11:05:05.732955 2740506 api_server.go:131] duration metric: took 11.481640525s to wait for apiserver health ...
	I0821 11:05:05.732964 2740506 system_pods.go:43] waiting for kube-system pods to appear ...
	I0821 11:05:05.732983 2740506 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0821 11:05:05.733047 2740506 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0821 11:05:05.776976 2740506 cri.go:89] found id: "6cbd4e54615cf75c28330ae999f7fca6dc25bac4cba64478a4f26b16ee3870ab"
	I0821 11:05:05.777036 2740506 cri.go:89] found id: ""
	I0821 11:05:05.777056 2740506 logs.go:284] 1 containers: [6cbd4e54615cf75c28330ae999f7fca6dc25bac4cba64478a4f26b16ee3870ab]
	I0821 11:05:05.777134 2740506 ssh_runner.go:195] Run: which crictl
	I0821 11:05:05.782049 2740506 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0821 11:05:05.782155 2740506 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0821 11:05:05.836890 2740506 cri.go:89] found id: "1875b24e0acf054bd167979cd03c603ca7e36ab44d8f67e5d1c361bb8fa371b8"
	I0821 11:05:05.836963 2740506 cri.go:89] found id: ""
	I0821 11:05:05.836989 2740506 logs.go:284] 1 containers: [1875b24e0acf054bd167979cd03c603ca7e36ab44d8f67e5d1c361bb8fa371b8]
	I0821 11:05:05.837073 2740506 ssh_runner.go:195] Run: which crictl
	I0821 11:05:05.842531 2740506 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0821 11:05:05.842640 2740506 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0821 11:05:05.887121 2740506 cri.go:89] found id: "f05a8dcc4ed32441d3886e012ec4b57ea5149ad9279182a34c8788d9edecaa80"
	I0821 11:05:05.887142 2740506 cri.go:89] found id: ""
	I0821 11:05:05.887149 2740506 logs.go:284] 1 containers: [f05a8dcc4ed32441d3886e012ec4b57ea5149ad9279182a34c8788d9edecaa80]
	I0821 11:05:05.887233 2740506 ssh_runner.go:195] Run: which crictl
	I0821 11:05:05.891781 2740506 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0821 11:05:05.891857 2740506 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0821 11:05:05.913503 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:05:05.942103 2740506 cri.go:89] found id: "da1aa61726154ba00afa1906d84454fa99d1291d023aee6fb2cb4eb5fe156ff4"
	I0821 11:05:05.942127 2740506 cri.go:89] found id: ""
	I0821 11:05:05.942134 2740506 logs.go:284] 1 containers: [da1aa61726154ba00afa1906d84454fa99d1291d023aee6fb2cb4eb5fe156ff4]
	I0821 11:05:05.942190 2740506 ssh_runner.go:195] Run: which crictl
	I0821 11:05:05.946711 2740506 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0821 11:05:05.946807 2740506 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0821 11:05:06.000291 2740506 cri.go:89] found id: "2f45118b518de48285b26e81d211404293caedb1b87911046f526f82b6cf40bc"
	I0821 11:05:06.000315 2740506 cri.go:89] found id: ""
	I0821 11:05:06.000323 2740506 logs.go:284] 1 containers: [2f45118b518de48285b26e81d211404293caedb1b87911046f526f82b6cf40bc]
	I0821 11:05:06.000412 2740506 ssh_runner.go:195] Run: which crictl
	I0821 11:05:06.007221 2740506 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0821 11:05:06.007330 2740506 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0821 11:05:06.052596 2740506 cri.go:89] found id: "fd62ade93bd47e62c2992f2879b7f9a3174f9aff70f88ee58f6b6fe2604f54d4"
	I0821 11:05:06.052670 2740506 cri.go:89] found id: ""
	I0821 11:05:06.052691 2740506 logs.go:284] 1 containers: [fd62ade93bd47e62c2992f2879b7f9a3174f9aff70f88ee58f6b6fe2604f54d4]
	I0821 11:05:06.052772 2740506 ssh_runner.go:195] Run: which crictl
	I0821 11:05:06.057538 2740506 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0821 11:05:06.057614 2740506 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0821 11:05:06.112046 2740506 cri.go:89] found id: "135c173990081138d57a5c8c0d7e2b8c5c0b52a9a84d5cb1d9de43de7729cc01"
	I0821 11:05:06.112072 2740506 cri.go:89] found id: ""
	I0821 11:05:06.112080 2740506 logs.go:284] 1 containers: [135c173990081138d57a5c8c0d7e2b8c5c0b52a9a84d5cb1d9de43de7729cc01]
	I0821 11:05:06.112168 2740506 ssh_runner.go:195] Run: which crictl
	I0821 11:05:06.117602 2740506 logs.go:123] Gathering logs for coredns [f05a8dcc4ed32441d3886e012ec4b57ea5149ad9279182a34c8788d9edecaa80] ...
	I0821 11:05:06.117628 2740506 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f05a8dcc4ed32441d3886e012ec4b57ea5149ad9279182a34c8788d9edecaa80"
	I0821 11:05:06.164588 2740506 logs.go:123] Gathering logs for kube-proxy [2f45118b518de48285b26e81d211404293caedb1b87911046f526f82b6cf40bc] ...
	I0821 11:05:06.164614 2740506 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2f45118b518de48285b26e81d211404293caedb1b87911046f526f82b6cf40bc"
	I0821 11:05:06.183350 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:05:06.217401 2740506 logs.go:123] Gathering logs for dmesg ...
	I0821 11:05:06.217428 2740506 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0821 11:05:06.238656 2740506 logs.go:123] Gathering logs for describe nodes ...
	I0821 11:05:06.238690 2740506 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0821 11:05:06.391183 2740506 logs.go:123] Gathering logs for kube-apiserver [6cbd4e54615cf75c28330ae999f7fca6dc25bac4cba64478a4f26b16ee3870ab] ...
	I0821 11:05:06.391215 2740506 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6cbd4e54615cf75c28330ae999f7fca6dc25bac4cba64478a4f26b16ee3870ab"
	I0821 11:05:06.414381 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:05:06.455292 2740506 logs.go:123] Gathering logs for etcd [1875b24e0acf054bd167979cd03c603ca7e36ab44d8f67e5d1c361bb8fa371b8] ...
	I0821 11:05:06.455332 2740506 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1875b24e0acf054bd167979cd03c603ca7e36ab44d8f67e5d1c361bb8fa371b8"
	I0821 11:05:06.532962 2740506 logs.go:123] Gathering logs for CRI-O ...
	I0821 11:05:06.532995 2740506 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0821 11:05:06.626063 2740506 logs.go:123] Gathering logs for container status ...
	I0821 11:05:06.626098 2740506 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0821 11:05:06.683609 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:05:06.684882 2740506 logs.go:123] Gathering logs for kubelet ...
	I0821 11:05:06.684908 2740506 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0821 11:05:06.746353 2740506 logs.go:138] Found kubelet problem: Aug 21 11:04:12 addons-664125 kubelet[1365]: W0821 11:04:12.094169    1365 reflector.go:533] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-664125" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-664125' and this object
	W0821 11:05:06.746576 2740506 logs.go:138] Found kubelet problem: Aug 21 11:04:12 addons-664125 kubelet[1365]: E0821 11:04:12.094211    1365 reflector.go:148] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-664125" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-664125' and this object
	W0821 11:05:06.747449 2740506 logs.go:138] Found kubelet problem: Aug 21 11:04:12 addons-664125 kubelet[1365]: W0821 11:04:12.123082    1365 reflector.go:533] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-664125" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-664125' and this object
	W0821 11:05:06.747653 2740506 logs.go:138] Found kubelet problem: Aug 21 11:04:12 addons-664125 kubelet[1365]: E0821 11:04:12.123132    1365 reflector.go:148] object-"gcp-auth"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-664125" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-664125' and this object
	W0821 11:05:06.747831 2740506 logs.go:138] Found kubelet problem: Aug 21 11:04:12 addons-664125 kubelet[1365]: W0821 11:04:12.123312    1365 reflector.go:533] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-664125" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-664125' and this object
	W0821 11:05:06.748025 2740506 logs.go:138] Found kubelet problem: Aug 21 11:04:12 addons-664125 kubelet[1365]: E0821 11:04:12.123339    1365 reflector.go:148] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-664125" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-664125' and this object
	W0821 11:05:06.748214 2740506 logs.go:138] Found kubelet problem: Aug 21 11:04:12 addons-664125 kubelet[1365]: W0821 11:04:12.123706    1365 reflector.go:533] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-664125" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-664125' and this object
	W0821 11:05:06.748422 2740506 logs.go:138] Found kubelet problem: Aug 21 11:04:12 addons-664125 kubelet[1365]: E0821 11:04:12.123747    1365 reflector.go:148] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-664125" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-664125' and this object
	I0821 11:05:06.780826 2740506 logs.go:123] Gathering logs for kube-scheduler [da1aa61726154ba00afa1906d84454fa99d1291d023aee6fb2cb4eb5fe156ff4] ...
	I0821 11:05:06.780852 2740506 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da1aa61726154ba00afa1906d84454fa99d1291d023aee6fb2cb4eb5fe156ff4"
	I0821 11:05:06.835064 2740506 logs.go:123] Gathering logs for kube-controller-manager [fd62ade93bd47e62c2992f2879b7f9a3174f9aff70f88ee58f6b6fe2604f54d4] ...
	I0821 11:05:06.835098 2740506 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd62ade93bd47e62c2992f2879b7f9a3174f9aff70f88ee58f6b6fe2604f54d4"
	I0821 11:05:06.902492 2740506 logs.go:123] Gathering logs for kindnet [135c173990081138d57a5c8c0d7e2b8c5c0b52a9a84d5cb1d9de43de7729cc01] ...
	I0821 11:05:06.902527 2740506 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 135c173990081138d57a5c8c0d7e2b8c5c0b52a9a84d5cb1d9de43de7729cc01"
	I0821 11:05:06.913392 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:05:06.955201 2740506 out.go:309] Setting ErrFile to fd 2...
	I0821 11:05:06.955227 2740506 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0821 11:05:06.955275 2740506 out.go:239] X Problems detected in kubelet:
	W0821 11:05:06.955288 2740506 out.go:239]   Aug 21 11:04:12 addons-664125 kubelet[1365]: E0821 11:04:12.123132    1365 reflector.go:148] object-"gcp-auth"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-664125" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-664125' and this object
	W0821 11:05:06.955296 2740506 out.go:239]   Aug 21 11:04:12 addons-664125 kubelet[1365]: W0821 11:04:12.123312    1365 reflector.go:533] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-664125" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-664125' and this object
	W0821 11:05:06.955302 2740506 out.go:239]   Aug 21 11:04:12 addons-664125 kubelet[1365]: E0821 11:04:12.123339    1365 reflector.go:148] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-664125" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-664125' and this object
	W0821 11:05:06.955311 2740506 out.go:239]   Aug 21 11:04:12 addons-664125 kubelet[1365]: W0821 11:04:12.123706    1365 reflector.go:533] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-664125" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-664125' and this object
	W0821 11:05:06.955318 2740506 out.go:239]   Aug 21 11:04:12 addons-664125 kubelet[1365]: E0821 11:04:12.123747    1365 reflector.go:148] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-664125" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-664125' and this object
	I0821 11:05:06.955330 2740506 out.go:309] Setting ErrFile to fd 2...
	I0821 11:05:06.955336 2740506 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 11:05:07.183234 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:05:07.413411 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:05:07.682843 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:05:07.913339 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:05:08.182345 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:05:08.413102 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:05:08.682749 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:05:08.913178 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:05:09.183018 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:05:09.413768 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:05:09.682927 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:05:09.913639 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:05:10.182195 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:05:10.413643 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:05:10.682528 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:05:10.913051 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:05:11.185069 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:05:11.412784 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:05:11.682833 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:05:11.913353 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:05:12.182449 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:05:12.413029 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:05:12.683281 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:05:12.912828 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:05:13.182403 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:05:13.413214 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:05:13.682547 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:05:13.913587 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:05:14.182780 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:05:14.413626 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:05:14.682904 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:05:14.913543 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:05:15.182887 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:05:15.413627 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:05:15.683160 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:05:15.913094 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:05:16.182450 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:05:16.412954 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:05:16.682605 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:05:16.915502 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:05:16.968787 2740506 system_pods.go:59] 17 kube-system pods found
	I0821 11:05:16.968831 2740506 system_pods.go:61] "coredns-5d78c9869d-qkb55" [d246a168-735b-4140-ab63-dae68fec3caa] Running
	I0821 11:05:16.968838 2740506 system_pods.go:61] "csi-hostpath-attacher-0" [707ba412-184f-453a-b992-533bdb7939b6] Running
	I0821 11:05:16.968877 2740506 system_pods.go:61] "csi-hostpath-resizer-0" [360ae8d7-9efe-415f-96db-859eeb90ee17] Running
	I0821 11:05:16.968884 2740506 system_pods.go:61] "csi-hostpathplugin-bgq4d" [8aae0234-c71d-4782-a2c5-75f8de29c365] Running
	I0821 11:05:16.968894 2740506 system_pods.go:61] "etcd-addons-664125" [3dbc6f72-b3d6-4f71-b145-9e3df0d94c0c] Running
	I0821 11:05:16.968903 2740506 system_pods.go:61] "kindnet-cq5jr" [d76e95ff-e8a7-42de-bf74-c55031529bea] Running
	I0821 11:05:16.968908 2740506 system_pods.go:61] "kube-apiserver-addons-664125" [daef5753-b19d-472f-9dc8-bcb78d22b8d1] Running
	I0821 11:05:16.968914 2740506 system_pods.go:61] "kube-controller-manager-addons-664125" [bc1f9c7b-b58e-4223-81a9-dcc02077e1c8] Running
	I0821 11:05:16.968941 2740506 system_pods.go:61] "kube-ingress-dns-minikube" [196932ce-9b51-4def-8924-33dd9283854e] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0821 11:05:16.968954 2740506 system_pods.go:61] "kube-proxy-l8g45" [429a43db-f186-4809-b523-4330a0870d51] Running
	I0821 11:05:16.968960 2740506 system_pods.go:61] "kube-scheduler-addons-664125" [38266344-3376-4991-93de-aabea082ecb7] Running
	I0821 11:05:16.968968 2740506 system_pods.go:61] "metrics-server-7746886d4f-prk24" [9b054fcf-f0c3-405d-bdb4-e0cce366a51c] Running
	I0821 11:05:16.968981 2740506 system_pods.go:61] "registry-proxy-ngqhd" [4b1b47d2-6796-4b8a-97ae-2699b8f2d4af] Running
	I0821 11:05:16.968986 2740506 system_pods.go:61] "registry-t9w8c" [2aaee73c-950c-479b-a2ea-af5439687b4f] Running
	I0821 11:05:16.968991 2740506 system_pods.go:61] "snapshot-controller-75bbb956b9-g2kh7" [e6199496-f1a0-43ca-978f-6eccc15b225a] Running
	I0821 11:05:16.968999 2740506 system_pods.go:61] "snapshot-controller-75bbb956b9-slf4r" [3a83ba93-62da-403f-ab61-01f1d7606963] Running
	I0821 11:05:16.969004 2740506 system_pods.go:61] "storage-provisioner" [cc414d93-52b6-4b2d-ace9-5f78ad03ae30] Running
	I0821 11:05:16.969010 2740506 system_pods.go:74] duration metric: took 11.236041696s to wait for pod list to return data ...
	I0821 11:05:16.969023 2740506 default_sa.go:34] waiting for default service account to be created ...
	I0821 11:05:16.979754 2740506 default_sa.go:45] found service account: "default"
	I0821 11:05:16.979779 2740506 default_sa.go:55] duration metric: took 10.749275ms for default service account to be created ...
	I0821 11:05:16.979790 2740506 system_pods.go:116] waiting for k8s-apps to be running ...
	I0821 11:05:16.991613 2740506 system_pods.go:86] 17 kube-system pods found
	I0821 11:05:16.991654 2740506 system_pods.go:89] "coredns-5d78c9869d-qkb55" [d246a168-735b-4140-ab63-dae68fec3caa] Running
	I0821 11:05:16.991663 2740506 system_pods.go:89] "csi-hostpath-attacher-0" [707ba412-184f-453a-b992-533bdb7939b6] Running
	I0821 11:05:16.991693 2740506 system_pods.go:89] "csi-hostpath-resizer-0" [360ae8d7-9efe-415f-96db-859eeb90ee17] Running
	I0821 11:05:16.991709 2740506 system_pods.go:89] "csi-hostpathplugin-bgq4d" [8aae0234-c71d-4782-a2c5-75f8de29c365] Running
	I0821 11:05:16.991715 2740506 system_pods.go:89] "etcd-addons-664125" [3dbc6f72-b3d6-4f71-b145-9e3df0d94c0c] Running
	I0821 11:05:16.991724 2740506 system_pods.go:89] "kindnet-cq5jr" [d76e95ff-e8a7-42de-bf74-c55031529bea] Running
	I0821 11:05:16.991732 2740506 system_pods.go:89] "kube-apiserver-addons-664125" [daef5753-b19d-472f-9dc8-bcb78d22b8d1] Running
	I0821 11:05:16.991741 2740506 system_pods.go:89] "kube-controller-manager-addons-664125" [bc1f9c7b-b58e-4223-81a9-dcc02077e1c8] Running
	I0821 11:05:16.991750 2740506 system_pods.go:89] "kube-ingress-dns-minikube" [196932ce-9b51-4def-8924-33dd9283854e] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0821 11:05:16.991763 2740506 system_pods.go:89] "kube-proxy-l8g45" [429a43db-f186-4809-b523-4330a0870d51] Running
	I0821 11:05:16.991772 2740506 system_pods.go:89] "kube-scheduler-addons-664125" [38266344-3376-4991-93de-aabea082ecb7] Running
	I0821 11:05:16.991777 2740506 system_pods.go:89] "metrics-server-7746886d4f-prk24" [9b054fcf-f0c3-405d-bdb4-e0cce366a51c] Running
	I0821 11:05:16.991782 2740506 system_pods.go:89] "registry-proxy-ngqhd" [4b1b47d2-6796-4b8a-97ae-2699b8f2d4af] Running
	I0821 11:05:16.991789 2740506 system_pods.go:89] "registry-t9w8c" [2aaee73c-950c-479b-a2ea-af5439687b4f] Running
	I0821 11:05:16.991794 2740506 system_pods.go:89] "snapshot-controller-75bbb956b9-g2kh7" [e6199496-f1a0-43ca-978f-6eccc15b225a] Running
	I0821 11:05:16.991799 2740506 system_pods.go:89] "snapshot-controller-75bbb956b9-slf4r" [3a83ba93-62da-403f-ab61-01f1d7606963] Running
	I0821 11:05:16.991803 2740506 system_pods.go:89] "storage-provisioner" [cc414d93-52b6-4b2d-ace9-5f78ad03ae30] Running
	I0821 11:05:16.991809 2740506 system_pods.go:126] duration metric: took 12.014398ms to wait for k8s-apps to be running ...
	I0821 11:05:16.991817 2740506 system_svc.go:44] waiting for kubelet service to be running ....
	I0821 11:05:16.991876 2740506 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0821 11:05:17.008712 2740506 system_svc.go:56] duration metric: took 16.886038ms WaitForService to wait for kubelet.
	I0821 11:05:17.008740 2740506 kubeadm.go:581] duration metric: took 1m39.326796052s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0821 11:05:17.008761 2740506 node_conditions.go:102] verifying NodePressure condition ...
	I0821 11:05:17.012407 2740506 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0821 11:05:17.012440 2740506 node_conditions.go:123] node cpu capacity is 2
	I0821 11:05:17.012452 2740506 node_conditions.go:105] duration metric: took 3.686154ms to run NodePressure ...
	I0821 11:05:17.012463 2740506 start.go:228] waiting for startup goroutines ...
	I0821 11:05:17.183534 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:05:17.413538 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:05:17.683701 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:05:17.913474 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:05:18.185636 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:05:18.422651 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:05:18.684083 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:05:18.913618 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:05:19.209565 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:05:19.413541 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:05:19.683530 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:05:19.913601 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:05:20.184251 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:05:20.413074 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:05:20.682508 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:05:20.913500 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:05:21.183059 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:05:21.413817 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:05:21.683066 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:05:21.913969 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:05:22.183772 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:05:22.431645 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:05:22.683282 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:05:22.912942 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:05:23.194005 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:05:23.413394 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:05:23.684991 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:05:23.915821 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:05:24.182748 2740506 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0821 11:05:24.413307 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:05:24.683152 2740506 kapi.go:107] duration metric: took 1m41.53130111s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0821 11:05:24.913029 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:05:25.414870 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:05:25.914205 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:05:26.414138 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:05:26.912955 2740506 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0821 11:05:27.413154 2740506 kapi.go:107] duration metric: took 1m39.532379645s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0821 11:05:27.415447 2740506 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-664125 cluster.
	I0821 11:05:27.417594 2740506 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0821 11:05:27.419370 2740506 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0821 11:05:27.421501 2740506 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, default-storageclass, storage-provisioner, metrics-server, inspektor-gadget, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0821 11:05:27.424054 2740506 addons.go:502] enable addons completed in 1m50.033553533s: enabled=[cloud-spanner ingress-dns default-storageclass storage-provisioner metrics-server inspektor-gadget volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0821 11:05:27.424104 2740506 start.go:233] waiting for cluster config update ...
	I0821 11:05:27.424122 2740506 start.go:242] writing updated cluster config ...
	I0821 11:05:27.424419 2740506 ssh_runner.go:195] Run: rm -f paused
	I0821 11:05:27.793462 2740506 start.go:600] kubectl: 1.28.0, cluster: 1.27.4 (minor skew: 1)
	I0821 11:05:27.805687 2740506 out.go:177] * Done! kubectl is now configured to use "addons-664125" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* Aug 21 11:08:26 addons-664125 crio[889]: time="2023-08-21 11:08:26.309359018Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=454584b9-7d63-4244-8fe9-dfbfa4462d0e name=/runtime.v1.ImageService/ImageStatus
	Aug 21 11:08:26 addons-664125 crio[889]: time="2023-08-21 11:08:26.309535072Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:13753a81eccfdd153bf7fc9a4c9198edbcce0110e7f46ed0d38cc654a6458ff5,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea],Size_:28496999,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=454584b9-7d63-4244-8fe9-dfbfa4462d0e name=/runtime.v1.ImageService/ImageStatus
	Aug 21 11:08:26 addons-664125 crio[889]: time="2023-08-21 11:08:26.310365537Z" level=info msg="Creating container: default/hello-world-app-65bdb79f98-nj58q/hello-world-app" id=10e2aa2b-7734-4e78-8a7c-7f41289b2a43 name=/runtime.v1.RuntimeService/CreateContainer
	Aug 21 11:08:26 addons-664125 crio[889]: time="2023-08-21 11:08:26.310468665Z" level=warning msg="Allowed annotations are specified for workload []"
	Aug 21 11:08:26 addons-664125 crio[889]: time="2023-08-21 11:08:26.416856069Z" level=info msg="Created container 2d4c621f63d28170df41955a8409b9186436541b40ec6bb418d1fedce97cd491: default/hello-world-app-65bdb79f98-nj58q/hello-world-app" id=10e2aa2b-7734-4e78-8a7c-7f41289b2a43 name=/runtime.v1.RuntimeService/CreateContainer
	Aug 21 11:08:26 addons-664125 crio[889]: time="2023-08-21 11:08:26.419462031Z" level=info msg="Starting container: 2d4c621f63d28170df41955a8409b9186436541b40ec6bb418d1fedce97cd491" id=79cabfac-a7af-4bbb-92dc-40c3932eb629 name=/runtime.v1.RuntimeService/StartContainer
	Aug 21 11:08:26 addons-664125 crio[889]: time="2023-08-21 11:08:26.452968113Z" level=info msg="Started container" PID=8567 containerID=2d4c621f63d28170df41955a8409b9186436541b40ec6bb418d1fedce97cd491 description=default/hello-world-app-65bdb79f98-nj58q/hello-world-app id=79cabfac-a7af-4bbb-92dc-40c3932eb629 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fc8723b5d01de5636fd6286b86a1fdc7fba2d5708101e95fccc280b6fa3c48cf
	Aug 21 11:08:26 addons-664125 conmon[8551]: conmon 2d4c621f63d28170df41 <ninfo>: container 8567 exited with status 1
	Aug 21 11:08:26 addons-664125 crio[889]: time="2023-08-21 11:08:26.697647137Z" level=info msg="Stopping container: cbe50098af1bbe5c26d3543d260fa3f9cb091ed13b4db30e679c2e0bdb02d91a (timeout: 1s)" id=c091174d-10e8-4629-a086-7b0bc312695f name=/runtime.v1.RuntimeService/StopContainer
	Aug 21 11:08:26 addons-664125 crio[889]: time="2023-08-21 11:08:26.898661194Z" level=info msg="Removing container: c14312e3095b22b71f92d7440b4641a66837f956f55d29b7e46285ca685d2d29" id=a82e702a-a8ad-46c9-8a3e-40865276e5e2 name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 21 11:08:26 addons-664125 crio[889]: time="2023-08-21 11:08:26.947123531Z" level=info msg="Removed container c14312e3095b22b71f92d7440b4641a66837f956f55d29b7e46285ca685d2d29: default/hello-world-app-65bdb79f98-nj58q/hello-world-app" id=a82e702a-a8ad-46c9-8a3e-40865276e5e2 name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 21 11:08:27 addons-664125 crio[889]: time="2023-08-21 11:08:27.714638413Z" level=warning msg="Stopping container cbe50098af1bbe5c26d3543d260fa3f9cb091ed13b4db30e679c2e0bdb02d91a with stop signal timed out: timeout reached after 1 seconds waiting for container process to exit" id=c091174d-10e8-4629-a086-7b0bc312695f name=/runtime.v1.RuntimeService/StopContainer
	Aug 21 11:08:27 addons-664125 conmon[5394]: conmon cbe50098af1bbe5c26d3 <ninfo>: container 5405 exited with status 137
	Aug 21 11:08:27 addons-664125 crio[889]: time="2023-08-21 11:08:27.879201100Z" level=info msg="Stopped container cbe50098af1bbe5c26d3543d260fa3f9cb091ed13b4db30e679c2e0bdb02d91a: ingress-nginx/ingress-nginx-controller-7799c6795f-dzws2/controller" id=c091174d-10e8-4629-a086-7b0bc312695f name=/runtime.v1.RuntimeService/StopContainer
	Aug 21 11:08:27 addons-664125 crio[889]: time="2023-08-21 11:08:27.879847365Z" level=info msg="Stopping pod sandbox: 1e620f8c73b6d48a86bc14ea756ce5cb4c081e15e7769244a5669df1583c0786" id=d01842fb-7f94-4f06-a07c-87f5a0819d4a name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 21 11:08:27 addons-664125 crio[889]: time="2023-08-21 11:08:27.883556165Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-3WYJ7LIXP23U6UZR - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-TGDGRGJR2YBT5KXQ - [0:0]\n-X KUBE-HP-TGDGRGJR2YBT5KXQ\n-X KUBE-HP-3WYJ7LIXP23U6UZR\nCOMMIT\n"
	Aug 21 11:08:27 addons-664125 crio[889]: time="2023-08-21 11:08:27.885190823Z" level=info msg="Closing host port tcp:80"
	Aug 21 11:08:27 addons-664125 crio[889]: time="2023-08-21 11:08:27.885236024Z" level=info msg="Closing host port tcp:443"
	Aug 21 11:08:27 addons-664125 crio[889]: time="2023-08-21 11:08:27.887082141Z" level=info msg="Host port tcp:80 does not have an open socket"
	Aug 21 11:08:27 addons-664125 crio[889]: time="2023-08-21 11:08:27.887115437Z" level=info msg="Host port tcp:443 does not have an open socket"
	Aug 21 11:08:27 addons-664125 crio[889]: time="2023-08-21 11:08:27.887277461Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7799c6795f-dzws2 Namespace:ingress-nginx ID:1e620f8c73b6d48a86bc14ea756ce5cb4c081e15e7769244a5669df1583c0786 UID:b5c493d3-86c7-4adb-8d97-27fcc534dc72 NetNS:/var/run/netns/dfeca420-895d-4b06-a5f9-c5a42f0a270f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Aug 21 11:08:27 addons-664125 crio[889]: time="2023-08-21 11:08:27.887429277Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7799c6795f-dzws2 from CNI network \"kindnet\" (type=ptp)"
	Aug 21 11:08:27 addons-664125 crio[889]: time="2023-08-21 11:08:27.907158935Z" level=info msg="Stopped pod sandbox: 1e620f8c73b6d48a86bc14ea756ce5cb4c081e15e7769244a5669df1583c0786" id=d01842fb-7f94-4f06-a07c-87f5a0819d4a name=/runtime.v1.RuntimeService/StopPodSandbox
	Aug 21 11:08:28 addons-664125 crio[889]: time="2023-08-21 11:08:28.910446303Z" level=info msg="Removing container: cbe50098af1bbe5c26d3543d260fa3f9cb091ed13b4db30e679c2e0bdb02d91a" id=5080b518-683a-45a7-a7a5-f88c9948b202 name=/runtime.v1.RuntimeService/RemoveContainer
	Aug 21 11:08:28 addons-664125 crio[889]: time="2023-08-21 11:08:28.927730332Z" level=info msg="Removed container cbe50098af1bbe5c26d3543d260fa3f9cb091ed13b4db30e679c2e0bdb02d91a: ingress-nginx/ingress-nginx-controller-7799c6795f-dzws2/controller" id=5080b518-683a-45a7-a7a5-f88c9948b202 name=/runtime.v1.RuntimeService/RemoveContainer
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2d4c621f63d28       13753a81eccfdd153bf7fc9a4c9198edbcce0110e7f46ed0d38cc654a6458ff5                                                             8 seconds ago       Exited              hello-world-app           2                   fc8723b5d01de       hello-world-app-65bdb79f98-nj58q
	7944fc46e2cb0       docker.io/library/nginx@sha256:cac882be2b7305e0c8d3e3cd0575a2fd58f5fde6dd5d6299605aa0f3e67ca385                              2 minutes ago       Running             nginx                     0                   ef6aedfc7f278       nginx
	e4ed5f797502e       ghcr.io/headlamp-k8s/headlamp@sha256:498ea22dc5acadaa4015e7a50335d21fdce45d9e8f1f8adf29c2777da4182f98                        2 minutes ago       Running             headlamp                  0                   f7da99b8a2d0f       headlamp-5c78f74d8d-wj9pg
	65cfe8f6fcd87       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:63b520448091bc94aa4dba00d6b3b3c25e410c4fb73aa46feae5b25f9895abaa                 3 minutes ago       Running             gcp-auth                  0                   f4f2d556bf499       gcp-auth-58478865f7-c62lq
	d522ef1f98fc6       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:543c40fd093964bc9ab509d3e791f9989963021f1e9e4c9c7b6700b02bfb227b   3 minutes ago       Exited              patch                     0                   9813bc1d66230       ingress-nginx-admission-patch-5sthn
	af168565fc5bd       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:543c40fd093964bc9ab509d3e791f9989963021f1e9e4c9c7b6700b02bfb227b   3 minutes ago       Exited              create                    0                   721a9d62f889e       ingress-nginx-admission-create-l8675
	f05a8dcc4ed32       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                                             4 minutes ago       Running             coredns                   0                   6e831ba592bcd       coredns-5d78c9869d-qkb55
	f9d716105465b       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             4 minutes ago       Running             storage-provisioner       0                   85df6f39b692d       storage-provisioner
	135c173990081       b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79                                                             4 minutes ago       Running             kindnet-cni               0                   458b6ff35d51e       kindnet-cq5jr
	2f45118b518de       532e5a30e948f1c084333316b13e68fbeff8df667f3830b082005127a6d86317                                                             4 minutes ago       Running             kube-proxy                0                   16841fa4025a4       kube-proxy-l8g45
	fd62ade93bd47       389f6f052cf83156f82a2bbbf6ea2c24292d246b58900d91f6a1707eacf510b2                                                             5 minutes ago       Running             kube-controller-manager   0                   9453bf79b31d8       kube-controller-manager-addons-664125
	1875b24e0acf0       24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737                                                             5 minutes ago       Running             etcd                      0                   c3b025e2c8cc2       etcd-addons-664125
	da1aa61726154       6eb63895cb67fce76da3ed6eaaa865ff55e7c761c9e6a691a83855ff0987a085                                                             5 minutes ago       Running             kube-scheduler            0                   e038102b88a76       kube-scheduler-addons-664125
	6cbd4e54615cf       64aece92d6bde5b472d8185fcd2d5ab1add8814923a26561821f7cab5e819388                                                             5 minutes ago       Running             kube-apiserver            0                   d5e0862ec9c10       kube-apiserver-addons-664125
	
	* 
	* ==> coredns [f05a8dcc4ed32441d3886e012ec4b57ea5149ad9279182a34c8788d9edecaa80] <==
	* [INFO] 10.244.0.16:55149 - 33512 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000057385s
	[INFO] 10.244.0.16:55149 - 14720 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001843385s
	[INFO] 10.244.0.16:33683 - 49318 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002386824s
	[INFO] 10.244.0.16:33683 - 64190 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00172868s
	[INFO] 10.244.0.16:55149 - 35638 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001976584s
	[INFO] 10.244.0.16:33683 - 47618 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000184735s
	[INFO] 10.244.0.16:55149 - 39790 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000130147s
	[INFO] 10.244.0.16:36257 - 25353 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000112047s
	[INFO] 10.244.0.16:54445 - 65023 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000073156s
	[INFO] 10.244.0.16:36257 - 61571 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000115608s
	[INFO] 10.244.0.16:54445 - 54509 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000124954s
	[INFO] 10.244.0.16:36257 - 6417 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000109388s
	[INFO] 10.244.0.16:54445 - 29505 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000238937s
	[INFO] 10.244.0.16:54445 - 32279 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000046087s
	[INFO] 10.244.0.16:36257 - 29490 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000165831s
	[INFO] 10.244.0.16:54445 - 6720 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000150742s
	[INFO] 10.244.0.16:36257 - 16465 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000189058s
	[INFO] 10.244.0.16:54445 - 5437 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000048909s
	[INFO] 10.244.0.16:36257 - 62676 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000056524s
	[INFO] 10.244.0.16:36257 - 43032 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001729s
	[INFO] 10.244.0.16:54445 - 25225 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001289649s
	[INFO] 10.244.0.16:36257 - 57613 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001095018s
	[INFO] 10.244.0.16:54445 - 35427 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001120331s
	[INFO] 10.244.0.16:36257 - 698 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000070226s
	[INFO] 10.244.0.16:54445 - 13007 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000032442s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-664125
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-664125
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43
	                    minikube.k8s.io/name=addons-664125
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_08_21T11_03_25_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-664125
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 21 Aug 2023 11:03:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-664125
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 21 Aug 2023 11:08:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 21 Aug 2023 11:08:30 +0000   Mon, 21 Aug 2023 11:03:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 21 Aug 2023 11:08:30 +0000   Mon, 21 Aug 2023 11:03:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 21 Aug 2023 11:08:30 +0000   Mon, 21 Aug 2023 11:03:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 21 Aug 2023 11:08:30 +0000   Mon, 21 Aug 2023 11:04:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-664125
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022564Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022564Ki
	  pods:               110
	System Info:
	  Machine ID:                 5bf2313ddf4c4aafa060858dc135deeb
	  System UUID:                da84d2bc-0163-4caf-9150-fba3b0ef16c5
	  Boot ID:                    02e315f4-a354-4b0b-b564-f929fd2e643c
	  Kernel Version:             5.15.0-1041-aws
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.27.4
	  Kube-Proxy Version:         v1.27.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-65bdb79f98-nj58q         0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m48s
	  gcp-auth                    gcp-auth-58478865f7-c62lq                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m48s
	  headlamp                    headlamp-5c78f74d8d-wj9pg                0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m
	  kube-system                 coredns-5d78c9869d-qkb55                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     4m57s
	  kube-system                 etcd-addons-664125                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m11s
	  kube-system                 kindnet-cq5jr                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4m57s
	  kube-system                 kube-apiserver-addons-664125             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m13s
	  kube-system                 kube-controller-manager-addons-664125    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m12s
	  kube-system                 kube-proxy-l8g45                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 kube-scheduler-addons-664125             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m13s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m52s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m19s (x8 over 5m19s)  kubelet          Node addons-664125 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m19s (x8 over 5m19s)  kubelet          Node addons-664125 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m19s (x8 over 5m19s)  kubelet          Node addons-664125 status is now: NodeHasSufficientPID
	  Normal  Starting                 5m11s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m11s                  kubelet          Node addons-664125 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m11s                  kubelet          Node addons-664125 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m11s                  kubelet          Node addons-664125 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m59s                  node-controller  Node addons-664125 event: Registered Node addons-664125 in Controller
	  Normal  NodeReady                4m23s                  kubelet          Node addons-664125 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.001029] FS-Cache: O-key=[8] 'f1495c0100000000'
	[  +0.000750] FS-Cache: N-cookie c=000000c0 [p=000000b7 fl=2 nc=0 na=1]
	[  +0.000955] FS-Cache: N-cookie d=00000000128a3fc5{9p.inode} n=00000000886b3137
	[  +0.001035] FS-Cache: N-key=[8] 'f1495c0100000000'
	[  +0.002930] FS-Cache: Duplicate cookie detected
	[  +0.000708] FS-Cache: O-cookie c=000000ba [p=000000b7 fl=226 nc=0 na=1]
	[  +0.000943] FS-Cache: O-cookie d=00000000128a3fc5{9p.inode} n=000000004d9cbc41
	[  +0.001075] FS-Cache: O-key=[8] 'f1495c0100000000'
	[  +0.000713] FS-Cache: N-cookie c=000000c1 [p=000000b7 fl=2 nc=0 na=1]
	[  +0.000931] FS-Cache: N-cookie d=00000000128a3fc5{9p.inode} n=00000000939f1609
	[  +0.001034] FS-Cache: N-key=[8] 'f1495c0100000000'
	[  +2.998600] FS-Cache: Duplicate cookie detected
	[  +0.000714] FS-Cache: O-cookie c=000000b8 [p=000000b7 fl=226 nc=0 na=1]
	[  +0.000956] FS-Cache: O-cookie d=00000000128a3fc5{9p.inode} n=00000000fa5b2717
	[  +0.001066] FS-Cache: O-key=[8] 'f0495c0100000000'
	[  +0.000698] FS-Cache: N-cookie c=000000c3 [p=000000b7 fl=2 nc=0 na=1]
	[  +0.000928] FS-Cache: N-cookie d=00000000128a3fc5{9p.inode} n=00000000df9e778a
	[  +0.001040] FS-Cache: N-key=[8] 'f0495c0100000000'
	[  +0.339333] FS-Cache: Duplicate cookie detected
	[  +0.000761] FS-Cache: O-cookie c=000000bd [p=000000b7 fl=226 nc=0 na=1]
	[  +0.000955] FS-Cache: O-cookie d=00000000128a3fc5{9p.inode} n=000000000adb5282
	[  +0.001160] FS-Cache: O-key=[8] 'f6495c0100000000'
	[  +0.000710] FS-Cache: N-cookie c=000000c4 [p=000000b7 fl=2 nc=0 na=1]
	[  +0.000938] FS-Cache: N-cookie d=00000000128a3fc5{9p.inode} n=00000000886b3137
	[  +0.001050] FS-Cache: N-key=[8] 'f6495c0100000000'
	
	* 
	* ==> etcd [1875b24e0acf054bd167979cd03c603ca7e36ab44d8f67e5d1c361bb8fa371b8] <==
	* {"level":"info","ts":"2023-08-21T11:03:39.663Z","caller":"traceutil/trace.go:171","msg":"trace[793825220] range","detail":"{range_begin:/registry/daemonsets/kube-system/kindnet; range_end:; response_count:1; response_revision:378; }","duration":"159.013154ms","start":"2023-08-21T11:03:39.504Z","end":"2023-08-21T11:03:39.663Z","steps":["trace[793825220] 'agreement among raft nodes before linearized reading'  (duration: 158.005822ms)"],"step_count":1}
	{"level":"info","ts":"2023-08-21T11:03:39.949Z","caller":"traceutil/trace.go:171","msg":"trace[1915133426] transaction","detail":"{read_only:false; response_revision:380; number_of_response:1; }","duration":"106.635369ms","start":"2023-08-21T11:03:39.842Z","end":"2023-08-21T11:03:39.949Z","steps":["trace[1915133426] 'process raft request'  (duration: 106.482075ms)"],"step_count":1}
	{"level":"info","ts":"2023-08-21T11:03:40.039Z","caller":"traceutil/trace.go:171","msg":"trace[1090070692] linearizableReadLoop","detail":"{readStateIndex:395; appliedIndex:392; }","duration":"102.462964ms","start":"2023-08-21T11:03:39.936Z","end":"2023-08-21T11:03:40.039Z","steps":["trace[1090070692] 'read index received'  (duration: 12.039079ms)","trace[1090070692] 'applied index is now lower than readState.Index'  (duration: 90.423335ms)"],"step_count":2}
	{"level":"info","ts":"2023-08-21T11:03:40.039Z","caller":"traceutil/trace.go:171","msg":"trace[464088450] transaction","detail":"{read_only:false; response_revision:381; number_of_response:1; }","duration":"179.767259ms","start":"2023-08-21T11:03:39.859Z","end":"2023-08-21T11:03:40.039Z","steps":["trace[464088450] 'process raft request'  (duration: 105.178775ms)","trace[464088450] 'compare'  (duration: 74.13703ms)"],"step_count":2}
	{"level":"info","ts":"2023-08-21T11:03:40.039Z","caller":"traceutil/trace.go:171","msg":"trace[1990804288] transaction","detail":"{read_only:false; response_revision:382; number_of_response:1; }","duration":"103.487846ms","start":"2023-08-21T11:03:39.936Z","end":"2023-08-21T11:03:40.039Z","steps":["trace[1990804288] 'process raft request'  (duration: 102.844158ms)"],"step_count":1}
	{"level":"warn","ts":"2023-08-21T11:03:40.042Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.180809ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets/kube-system/coredns-5d78c9869d\" ","response":"range_response_count:1 size:3635"}
	{"level":"info","ts":"2023-08-21T11:03:40.043Z","caller":"traceutil/trace.go:171","msg":"trace[575085503] range","detail":"{range_begin:/registry/replicasets/kube-system/coredns-5d78c9869d; range_end:; response_count:1; response_revision:382; }","duration":"106.730071ms","start":"2023-08-21T11:03:39.936Z","end":"2023-08-21T11:03:40.043Z","steps":["trace[575085503] 'agreement among raft nodes before linearized reading'  (duration: 105.125885ms)"],"step_count":1}
	{"level":"warn","ts":"2023-08-21T11:03:40.042Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.690764ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/kindnet\" ","response":"range_response_count:1 size:4619"}
	{"level":"info","ts":"2023-08-21T11:03:40.044Z","caller":"traceutil/trace.go:171","msg":"trace[149510012] range","detail":"{range_begin:/registry/daemonsets/kube-system/kindnet; range_end:; response_count:1; response_revision:382; }","duration":"107.096324ms","start":"2023-08-21T11:03:39.936Z","end":"2023-08-21T11:03:40.044Z","steps":["trace[149510012] 'agreement among raft nodes before linearized reading'  (duration: 105.653071ms)"],"step_count":1}
	{"level":"warn","ts":"2023-08-21T11:03:40.059Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.500265ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/\" range_end:\"/registry/serviceaccounts/kube-system0\" ","response":"range_response_count:36 size:7511"}
	{"level":"info","ts":"2023-08-21T11:03:40.059Z","caller":"traceutil/trace.go:171","msg":"trace[296255003] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/; range_end:/registry/serviceaccounts/kube-system0; response_count:36; response_revision:382; }","duration":"117.11054ms","start":"2023-08-21T11:03:39.942Z","end":"2023-08-21T11:03:40.059Z","steps":["trace[296255003] 'agreement among raft nodes before linearized reading'  (duration: 104.353043ms)"],"step_count":1}
	{"level":"info","ts":"2023-08-21T11:03:40.554Z","caller":"traceutil/trace.go:171","msg":"trace[831649092] linearizableReadLoop","detail":"{readStateIndex:396; appliedIndex:396; }","duration":"131.513715ms","start":"2023-08-21T11:03:40.422Z","end":"2023-08-21T11:03:40.554Z","steps":["trace[831649092] 'read index received'  (duration: 131.506634ms)","trace[831649092] 'applied index is now lower than readState.Index'  (duration: 5.908µs)"],"step_count":2}
	{"level":"warn","ts":"2023-08-21T11:03:40.558Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"135.778361ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses/standard\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-08-21T11:03:40.558Z","caller":"traceutil/trace.go:171","msg":"trace[1265023102] range","detail":"{range_begin:/registry/storageclasses/standard; range_end:; response_count:0; response_revision:383; }","duration":"135.920536ms","start":"2023-08-21T11:03:40.422Z","end":"2023-08-21T11:03:40.558Z","steps":["trace[1265023102] 'agreement among raft nodes before linearized reading'  (duration: 131.597578ms)"],"step_count":1}
	{"level":"info","ts":"2023-08-21T11:03:40.614Z","caller":"traceutil/trace.go:171","msg":"trace[455163169] transaction","detail":"{read_only:false; response_revision:384; number_of_response:1; }","duration":"191.803843ms","start":"2023-08-21T11:03:40.422Z","end":"2023-08-21T11:03:40.614Z","steps":["trace[455163169] 'process raft request'  (duration: 131.991605ms)","trace[455163169] 'compare'  (duration: 57.930348ms)"],"step_count":2}
	{"level":"warn","ts":"2023-08-21T11:03:41.489Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.417311ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-08-21T11:03:41.491Z","caller":"traceutil/trace.go:171","msg":"trace[607504760] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:417; }","duration":"104.074485ms","start":"2023-08-21T11:03:41.387Z","end":"2023-08-21T11:03:41.491Z","steps":["trace[607504760] 'agreement among raft nodes before linearized reading'  (duration: 102.403166ms)"],"step_count":1}
	{"level":"warn","ts":"2023-08-21T11:03:41.491Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"110.374814ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets/kube-system/coredns-5d78c9869d\" ","response":"range_response_count:1 size:3797"}
	{"level":"info","ts":"2023-08-21T11:03:41.491Z","caller":"traceutil/trace.go:171","msg":"trace[943004693] range","detail":"{range_begin:/registry/replicasets/kube-system/coredns-5d78c9869d; range_end:; response_count:1; response_revision:417; }","duration":"110.5313ms","start":"2023-08-21T11:03:41.381Z","end":"2023-08-21T11:03:41.491Z","steps":["trace[943004693] 'agreement among raft nodes before linearized reading'  (duration: 110.122627ms)"],"step_count":1}
	{"level":"warn","ts":"2023-08-21T11:03:41.505Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"111.089787ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/kube-system/registry\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-08-21T11:03:41.505Z","caller":"traceutil/trace.go:171","msg":"trace[925144838] range","detail":"{range_begin:/registry/services/specs/kube-system/registry; range_end:; response_count:0; response_revision:417; }","duration":"124.429664ms","start":"2023-08-21T11:03:41.381Z","end":"2023-08-21T11:03:41.505Z","steps":["trace[925144838] 'agreement among raft nodes before linearized reading'  (duration: 111.069028ms)"],"step_count":1}
	{"level":"warn","ts":"2023-08-21T11:03:41.491Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.395233ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-664125\" ","response":"range_response_count:1 size:5743"}
	{"level":"warn","ts":"2023-08-21T11:03:41.506Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.202309ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/default/cloud-spanner-emulator\" ","response":"range_response_count:1 size:3142"}
	{"level":"info","ts":"2023-08-21T11:03:41.510Z","caller":"traceutil/trace.go:171","msg":"trace[1069431024] range","detail":"{range_begin:/registry/deployments/default/cloud-spanner-emulator; range_end:; response_count:1; response_revision:417; }","duration":"129.64912ms","start":"2023-08-21T11:03:41.381Z","end":"2023-08-21T11:03:41.510Z","steps":["trace[1069431024] 'agreement among raft nodes before linearized reading'  (duration: 125.116755ms)"],"step_count":1}
	{"level":"info","ts":"2023-08-21T11:03:41.510Z","caller":"traceutil/trace.go:171","msg":"trace[538262388] range","detail":"{range_begin:/registry/minions/addons-664125; range_end:; response_count:1; response_revision:417; }","duration":"124.036318ms","start":"2023-08-21T11:03:41.386Z","end":"2023-08-21T11:03:41.510Z","steps":["trace[538262388] 'agreement among raft nodes before linearized reading'  (duration: 104.292351ms)"],"step_count":1}
	
	* 
	* ==> gcp-auth [65cfe8f6fcd8776d59f814c9bec4e03ad0418e8a1a9870a9141dd749ea6eb3ca] <==
	* 2023/08/21 11:05:26 GCP Auth Webhook started!
	2023/08/21 11:05:35 Ready to marshal response ...
	2023/08/21 11:05:35 Ready to write response ...
	2023/08/21 11:05:35 Ready to marshal response ...
	2023/08/21 11:05:35 Ready to write response ...
	2023/08/21 11:05:35 Ready to marshal response ...
	2023/08/21 11:05:35 Ready to write response ...
	2023/08/21 11:05:38 Ready to marshal response ...
	2023/08/21 11:05:38 Ready to write response ...
	2023/08/21 11:05:47 Ready to marshal response ...
	2023/08/21 11:05:47 Ready to write response ...
	2023/08/21 11:05:54 Ready to marshal response ...
	2023/08/21 11:05:54 Ready to write response ...
	2023/08/21 11:06:22 Ready to marshal response ...
	2023/08/21 11:06:22 Ready to write response ...
	2023/08/21 11:08:08 Ready to marshal response ...
	2023/08/21 11:08:08 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  11:08:35 up 19:50,  0 users,  load average: 1.43, 1.67, 1.98
	Linux addons-664125 5.15.0-1041-aws #46~20.04.1-Ubuntu SMP Wed Jul 19 15:39:29 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [135c173990081138d57a5c8c0d7e2b8c5c0b52a9a84d5cb1d9de43de7729cc01] <==
	* I0821 11:06:32.047642       1 main.go:227] handling current node
	I0821 11:06:42.052306       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0821 11:06:42.052584       1 main.go:227] handling current node
	I0821 11:06:52.063941       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0821 11:06:52.063972       1 main.go:227] handling current node
	I0821 11:07:02.074132       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0821 11:07:02.074161       1 main.go:227] handling current node
	I0821 11:07:12.078184       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0821 11:07:12.078210       1 main.go:227] handling current node
	I0821 11:07:22.091035       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0821 11:07:22.091061       1 main.go:227] handling current node
	I0821 11:07:32.103675       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0821 11:07:32.103706       1 main.go:227] handling current node
	I0821 11:07:42.109060       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0821 11:07:42.109091       1 main.go:227] handling current node
	I0821 11:07:52.119066       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0821 11:07:52.119095       1 main.go:227] handling current node
	I0821 11:08:02.131508       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0821 11:08:02.131538       1 main.go:227] handling current node
	I0821 11:08:12.139484       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0821 11:08:12.139907       1 main.go:227] handling current node
	I0821 11:08:22.144278       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0821 11:08:22.144307       1 main.go:227] handling current node
	I0821 11:08:32.154882       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0821 11:08:32.154911       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [6cbd4e54615cf75c28330ae999f7fca6dc25bac4cba64478a4f26b16ee3870ab] <==
	* I0821 11:06:38.524385       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0821 11:06:38.533106       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0821 11:06:38.534042       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0821 11:06:38.541527       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0821 11:06:38.541672       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0821 11:06:38.545505       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0821 11:06:38.545559       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0821 11:06:39.534638       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0821 11:06:39.548751       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0821 11:06:39.570326       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E0821 11:06:48.486731       1 handler_proxy.go:144] error resolving kube-system/metrics-server: service "metrics-server" not found
	W0821 11:06:48.486768       1 handler_proxy.go:100] no RequestInfo found in the context
	E0821 11:06:48.486805       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0821 11:06:48.486814       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0821 11:06:48.510235       1 controller.go:132] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0821 11:06:50.156259       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0821 11:06:50.170456       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0821 11:06:51.200058       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E0821 11:07:48.487161       1 handler_proxy.go:144] error resolving kube-system/metrics-server: service "metrics-server" not found
	W0821 11:07:48.487184       1 handler_proxy.go:100] no RequestInfo found in the context
	E0821 11:07:48.487220       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0821 11:07:48.487229       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0821 11:08:08.857503       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs=map[IPv4:10.111.149.252]
	
	* 
	* ==> kube-controller-manager [fd62ade93bd47e62c2992f2879b7f9a3174f9aff70f88ee58f6b6fe2604f54d4] <==
	* I0821 11:07:07.444449       1 shared_informer.go:318] Caches are synced for garbage collector
	W0821 11:07:09.451995       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0821 11:07:09.452030       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0821 11:07:12.129458       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0821 11:07:12.129491       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0821 11:07:16.670637       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0821 11:07:16.670758       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0821 11:07:19.070153       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0821 11:07:19.070185       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0821 11:07:32.151453       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0821 11:07:32.151497       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0821 11:07:46.787697       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0821 11:07:46.787731       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0821 11:08:04.884457       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0821 11:08:04.884572       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0821 11:08:07.235250       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0821 11:08:07.235283       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0821 11:08:08.572487       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-65bdb79f98 to 1"
	I0821 11:08:08.616201       1 event.go:307] "Event occurred" object="default/hello-world-app-65bdb79f98" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-65bdb79f98-nj58q"
	W0821 11:08:12.947947       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0821 11:08:12.947981       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0821 11:08:26.675827       1 job_controller.go:523] enqueueing job ingress-nginx/ingress-nginx-admission-create
	I0821 11:08:26.687937       1 job_controller.go:523] enqueueing job ingress-nginx/ingress-nginx-admission-patch
	W0821 11:08:27.321528       1 reflector.go:533] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0821 11:08:27.321561       1 reflector.go:148] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	* 
	* ==> kube-proxy [2f45118b518de48285b26e81d211404293caedb1b87911046f526f82b6cf40bc] <==
	* I0821 11:03:42.735155       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0821 11:03:42.735647       1 server_others.go:110] "Detected node IP" address="192.168.49.2"
	I0821 11:03:42.735731       1 server_others.go:554] "Using iptables proxy"
	I0821 11:03:42.797399       1 server_others.go:192] "Using iptables Proxier"
	I0821 11:03:42.797432       1 server_others.go:199] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0821 11:03:42.797441       1 server_others.go:200] "Creating dualStackProxier for iptables"
	I0821 11:03:42.797458       1 server_others.go:484] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, defaulting to no-op detect-local for IPv6"
	I0821 11:03:42.797523       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0821 11:03:42.798064       1 server.go:658] "Version info" version="v1.27.4"
	I0821 11:03:42.798085       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0821 11:03:42.798947       1 config.go:188] "Starting service config controller"
	I0821 11:03:42.798996       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0821 11:03:42.799031       1 config.go:97] "Starting endpoint slice config controller"
	I0821 11:03:42.799039       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0821 11:03:42.799551       1 config.go:315] "Starting node config controller"
	I0821 11:03:42.799567       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0821 11:03:42.901459       1 shared_informer.go:318] Caches are synced for node config
	I0821 11:03:42.901500       1 shared_informer.go:318] Caches are synced for service config
	I0821 11:03:42.901517       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [da1aa61726154ba00afa1906d84454fa99d1291d023aee6fb2cb4eb5fe156ff4] <==
	* W0821 11:03:21.362798       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0821 11:03:21.362811       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0821 11:03:21.362897       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0821 11:03:21.362911       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0821 11:03:21.362970       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0821 11:03:21.362984       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0821 11:03:21.363020       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0821 11:03:21.363033       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0821 11:03:21.363599       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0821 11:03:21.363625       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0821 11:03:21.363680       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0821 11:03:21.363695       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0821 11:03:21.364969       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0821 11:03:21.364993       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0821 11:03:22.173222       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0821 11:03:22.173260       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0821 11:03:22.241206       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0821 11:03:22.241242       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0821 11:03:22.312611       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0821 11:03:22.312648       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0821 11:03:22.332676       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0821 11:03:22.332711       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0821 11:03:22.450715       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0821 11:03:22.450752       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0821 11:03:24.342064       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Aug 21 11:08:24 addons-664125 kubelet[1365]: E0821 11:08:24.470882    1365 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/ec0b4f7c1158fb76227e21a82406b0b4f36e551ba560b3b5bc29dfccf0358827/diff" to get inode usage: stat /var/lib/containers/storage/overlay/ec0b4f7c1158fb76227e21a82406b0b4f36e551ba560b3b5bc29dfccf0358827/diff: no such file or directory, extraDiskErr: <nil>
	Aug 21 11:08:24 addons-664125 kubelet[1365]: E0821 11:08:24.489542    1365 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/4d1d60c494a1635db996e34edc7df36f64c84805670442ead6a891a874cb0a75/diff" to get inode usage: stat /var/lib/containers/storage/overlay/4d1d60c494a1635db996e34edc7df36f64c84805670442ead6a891a874cb0a75/diff: no such file or directory, extraDiskErr: <nil>
	Aug 21 11:08:24 addons-664125 kubelet[1365]: E0821 11:08:24.508799    1365 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/ec0b4f7c1158fb76227e21a82406b0b4f36e551ba560b3b5bc29dfccf0358827/diff" to get inode usage: stat /var/lib/containers/storage/overlay/ec0b4f7c1158fb76227e21a82406b0b4f36e551ba560b3b5bc29dfccf0358827/diff: no such file or directory, extraDiskErr: <nil>
	Aug 21 11:08:24 addons-664125 kubelet[1365]: E0821 11:08:24.525729    1365 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/4d1d60c494a1635db996e34edc7df36f64c84805670442ead6a891a874cb0a75/diff" to get inode usage: stat /var/lib/containers/storage/overlay/4d1d60c494a1635db996e34edc7df36f64c84805670442ead6a891a874cb0a75/diff: no such file or directory, extraDiskErr: <nil>
	Aug 21 11:08:24 addons-664125 kubelet[1365]: I0821 11:08:24.997115    1365 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-blcqp\" (UniqueName: \"kubernetes.io/projected/196932ce-9b51-4def-8924-33dd9283854e-kube-api-access-blcqp\") pod \"196932ce-9b51-4def-8924-33dd9283854e\" (UID: \"196932ce-9b51-4def-8924-33dd9283854e\") "
	Aug 21 11:08:24 addons-664125 kubelet[1365]: I0821 11:08:24.999535    1365 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/196932ce-9b51-4def-8924-33dd9283854e-kube-api-access-blcqp" (OuterVolumeSpecName: "kube-api-access-blcqp") pod "196932ce-9b51-4def-8924-33dd9283854e" (UID: "196932ce-9b51-4def-8924-33dd9283854e"). InnerVolumeSpecName "kube-api-access-blcqp". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 21 11:08:25 addons-664125 kubelet[1365]: I0821 11:08:25.098058    1365 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-blcqp\" (UniqueName: \"kubernetes.io/projected/196932ce-9b51-4def-8924-33dd9283854e-kube-api-access-blcqp\") on node \"addons-664125\" DevicePath \"\""
	Aug 21 11:08:25 addons-664125 kubelet[1365]: I0821 11:08:25.891942    1365 scope.go:115] "RemoveContainer" containerID="f46e81dbfb36ce0bd232464971f44f5ffcc50bd807d43cf990b963c9cc78cb06"
	Aug 21 11:08:26 addons-664125 kubelet[1365]: I0821 11:08:26.307386    1365 scope.go:115] "RemoveContainer" containerID="c14312e3095b22b71f92d7440b4641a66837f956f55d29b7e46285ca685d2d29"
	Aug 21 11:08:26 addons-664125 kubelet[1365]: I0821 11:08:26.308984    1365 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=196932ce-9b51-4def-8924-33dd9283854e path="/var/lib/kubelet/pods/196932ce-9b51-4def-8924-33dd9283854e/volumes"
	Aug 21 11:08:26 addons-664125 kubelet[1365]: E0821 11:08:26.703157    1365 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7799c6795f-dzws2.177d616194c84753", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7799c6795f-dzws2", UID:"b5c493d3-86c7-4adb-8d97-27fcc534dc72", APIVersion:"v1", ResourceVersion:"758", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"addons-664125"}, FirstTimestamp:time.Date(2023, time.August, 21, 11, 8, 26, 697049939, time.Local), LastTimestamp:time.Date(2023, time.August, 21, 11, 8, 26, 697049939, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7799c6795f-dzws2.177d616194c84753" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Aug 21 11:08:26 addons-664125 kubelet[1365]: I0821 11:08:26.896613    1365 scope.go:115] "RemoveContainer" containerID="c14312e3095b22b71f92d7440b4641a66837f956f55d29b7e46285ca685d2d29"
	Aug 21 11:08:26 addons-664125 kubelet[1365]: I0821 11:08:26.896860    1365 scope.go:115] "RemoveContainer" containerID="2d4c621f63d28170df41955a8409b9186436541b40ec6bb418d1fedce97cd491"
	Aug 21 11:08:26 addons-664125 kubelet[1365]: E0821 11:08:26.897122    1365 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 20s restarting failed container=hello-world-app pod=hello-world-app-65bdb79f98-nj58q_default(ca42b571-23fd-483e-8cfa-179be506f620)\"" pod="default/hello-world-app-65bdb79f98-nj58q" podUID=ca42b571-23fd-483e-8cfa-179be506f620
	Aug 21 11:08:26 addons-664125 kubelet[1365]: E0821 11:08:26.916864    1365 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7799c6795f-dzws2.177d6161a174e01e", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7799c6795f-dzws2", UID:"b5c493d3-86c7-4adb-8d97-27fcc534dc72", APIVersion:"v1", ResourceVersion:"758", FieldPath:"spec.containers{controller}"}, Reason:"Unhealthy", Message:"Readiness probe failed: HTTP probe failed with statuscode: 5
00", Source:v1.EventSource{Component:"kubelet", Host:"addons-664125"}, FirstTimestamp:time.Date(2023, time.August, 21, 11, 8, 26, 909687838, time.Local), LastTimestamp:time.Date(2023, time.August, 21, 11, 8, 26, 909687838, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7799c6795f-dzws2.177d6161a174e01e" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Aug 21 11:08:28 addons-664125 kubelet[1365]: I0821 11:08:28.017783    1365 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqqn8\" (UniqueName: \"kubernetes.io/projected/b5c493d3-86c7-4adb-8d97-27fcc534dc72-kube-api-access-nqqn8\") pod \"b5c493d3-86c7-4adb-8d97-27fcc534dc72\" (UID: \"b5c493d3-86c7-4adb-8d97-27fcc534dc72\") "
	Aug 21 11:08:28 addons-664125 kubelet[1365]: I0821 11:08:28.018357    1365 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b5c493d3-86c7-4adb-8d97-27fcc534dc72-webhook-cert\") pod \"b5c493d3-86c7-4adb-8d97-27fcc534dc72\" (UID: \"b5c493d3-86c7-4adb-8d97-27fcc534dc72\") "
	Aug 21 11:08:28 addons-664125 kubelet[1365]: I0821 11:08:28.020371    1365 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5c493d3-86c7-4adb-8d97-27fcc534dc72-kube-api-access-nqqn8" (OuterVolumeSpecName: "kube-api-access-nqqn8") pod "b5c493d3-86c7-4adb-8d97-27fcc534dc72" (UID: "b5c493d3-86c7-4adb-8d97-27fcc534dc72"). InnerVolumeSpecName "kube-api-access-nqqn8". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 21 11:08:28 addons-664125 kubelet[1365]: I0821 11:08:28.021955    1365 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5c493d3-86c7-4adb-8d97-27fcc534dc72-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "b5c493d3-86c7-4adb-8d97-27fcc534dc72" (UID: "b5c493d3-86c7-4adb-8d97-27fcc534dc72"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 21 11:08:28 addons-664125 kubelet[1365]: I0821 11:08:28.119189    1365 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b5c493d3-86c7-4adb-8d97-27fcc534dc72-webhook-cert\") on node \"addons-664125\" DevicePath \"\""
	Aug 21 11:08:28 addons-664125 kubelet[1365]: I0821 11:08:28.119231    1365 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-nqqn8\" (UniqueName: \"kubernetes.io/projected/b5c493d3-86c7-4adb-8d97-27fcc534dc72-kube-api-access-nqqn8\") on node \"addons-664125\" DevicePath \"\""
	Aug 21 11:08:28 addons-664125 kubelet[1365]: I0821 11:08:28.309168    1365 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=b1939ab9-9f8e-4b84-a59b-3ed65e67b268 path="/var/lib/kubelet/pods/b1939ab9-9f8e-4b84-a59b-3ed65e67b268/volumes"
	Aug 21 11:08:28 addons-664125 kubelet[1365]: I0821 11:08:28.309593    1365 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=b5c493d3-86c7-4adb-8d97-27fcc534dc72 path="/var/lib/kubelet/pods/b5c493d3-86c7-4adb-8d97-27fcc534dc72/volumes"
	Aug 21 11:08:28 addons-664125 kubelet[1365]: I0821 11:08:28.309991    1365 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=d3e537b3-a3d1-4f35-97df-71b0b061551d path="/var/lib/kubelet/pods/d3e537b3-a3d1-4f35-97df-71b0b061551d/volumes"
	Aug 21 11:08:28 addons-664125 kubelet[1365]: I0821 11:08:28.909230    1365 scope.go:115] "RemoveContainer" containerID="cbe50098af1bbe5c26d3543d260fa3f9cb091ed13b4db30e679c2e0bdb02d91a"
	
	* 
	* ==> storage-provisioner [f9d716105465b913ca01a686ea080f735e58aa2c0a449fc88baff7a0a6999b52] <==
	* I0821 11:04:12.899083       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0821 11:04:12.941561       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0821 11:04:12.941655       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0821 11:04:12.950180       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0821 11:04:12.961303       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-664125_add2235c-7f29-431c-9068-7feca40db3e9!
	I0821 11:04:12.950795       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"eb06c641-334b-4c6c-9648-7cdc9dcb931b", APIVersion:"v1", ResourceVersion:"826", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-664125_add2235c-7f29-431c-9068-7feca40db3e9 became leader
	I0821 11:04:13.070871       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-664125_add2235c-7f29-431c-9068-7feca40db3e9!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-664125 -n addons-664125
helpers_test.go:261: (dbg) Run:  kubectl --context addons-664125 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (170.13s)

TestFunctional/parallel/DashboardCmd (5.81s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-723696 --alsologtostderr -v=1]
functional_test.go:914: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-723696 --alsologtostderr -v=1] ...
functional_test.go:906: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-723696 --alsologtostderr -v=1] stdout:
functional_test.go:906: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-723696 --alsologtostderr -v=1] stderr:
I0821 11:13:28.988816 2766711 out.go:296] Setting OutFile to fd 1 ...
I0821 11:13:28.989967 2766711 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0821 11:13:28.989996 2766711 out.go:309] Setting ErrFile to fd 2...
I0821 11:13:28.990013 2766711 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0821 11:13:28.990304 2766711 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17102-2734539/.minikube/bin
I0821 11:13:28.990611 2766711 mustload.go:65] Loading cluster: functional-723696
I0821 11:13:28.990995 2766711 config.go:182] Loaded profile config "functional-723696": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
I0821 11:13:28.991478 2766711 cli_runner.go:164] Run: docker container inspect functional-723696 --format={{.State.Status}}
I0821 11:13:29.012019 2766711 host.go:66] Checking if "functional-723696" exists ...
I0821 11:13:29.012354 2766711 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0821 11:13:29.096363 2766711 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:46 SystemTime:2023-08-21 11:13:29.086349981 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archit
ecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> Se
rverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
I0821 11:13:29.096498 2766711 api_server.go:166] Checking apiserver status ...
I0821 11:13:29.096557 2766711 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0821 11:13:29.096599 2766711 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-723696
I0821 11:13:29.114585 2766711 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36198 SSHKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/machines/functional-723696/id_rsa Username:docker}
I0821 11:13:29.215051 2766711 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4581/cgroup
I0821 11:13:29.226265 2766711 api_server.go:182] apiserver freezer: "5:freezer:/docker/916385f92e57258ec5895ffd12b5f3e5ba86ba06905a77c6d5d2a5e3f925537b/crio/crio-7c0e2518bf333c086dce219f0cd2e1f74476006944d5f203b91e5b085f48f709"
I0821 11:13:29.226335 2766711 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/916385f92e57258ec5895ffd12b5f3e5ba86ba06905a77c6d5d2a5e3f925537b/crio/crio-7c0e2518bf333c086dce219f0cd2e1f74476006944d5f203b91e5b085f48f709/freezer.state
I0821 11:13:29.236728 2766711 api_server.go:204] freezer state: "THAWED"
I0821 11:13:29.236754 2766711 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
I0821 11:13:29.246068 2766711 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
ok
W0821 11:13:29.246106 2766711 out.go:239] * Enabling dashboard ...
* Enabling dashboard ...
I0821 11:13:29.246294 2766711 config.go:182] Loaded profile config "functional-723696": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
I0821 11:13:29.246326 2766711 addons.go:69] Setting dashboard=true in profile "functional-723696"
I0821 11:13:29.246338 2766711 addons.go:231] Setting addon dashboard=true in "functional-723696"
I0821 11:13:29.246363 2766711 host.go:66] Checking if "functional-723696" exists ...
I0821 11:13:29.246772 2766711 cli_runner.go:164] Run: docker container inspect functional-723696 --format={{.State.Status}}
I0821 11:13:29.272794 2766711 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0821 11:13:29.275300 2766711 out.go:177]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I0821 11:13:29.278155 2766711 addons.go:423] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0821 11:13:29.278176 2766711 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0821 11:13:29.278245 2766711 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-723696
I0821 11:13:29.300200 2766711 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36198 SSHKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/machines/functional-723696/id_rsa Username:docker}
I0821 11:13:29.406668 2766711 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0821 11:13:29.406690 2766711 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0821 11:13:29.429708 2766711 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0821 11:13:29.429728 2766711 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0821 11:13:29.451068 2766711 addons.go:423] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0821 11:13:29.451092 2766711 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0821 11:13:29.471617 2766711 addons.go:423] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0821 11:13:29.471643 2766711 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I0821 11:13:29.493551 2766711 addons.go:423] installing /etc/kubernetes/addons/dashboard-role.yaml
I0821 11:13:29.493573 2766711 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0821 11:13:29.514911 2766711 addons.go:423] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0821 11:13:29.514935 2766711 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0821 11:13:29.535735 2766711 addons.go:423] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0821 11:13:29.535757 2766711 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0821 11:13:29.556477 2766711 addons.go:423] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0821 11:13:29.556501 2766711 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0821 11:13:29.576957 2766711 addons.go:423] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0821 11:13:29.576979 2766711 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0821 11:13:29.598912 2766711 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0821 11:13:30.603917 2766711 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.004965245s)
I0821 11:13:30.606232 2766711 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:

	minikube -p functional-723696 addons enable metrics-server	

I0821 11:13:30.608275 2766711 addons.go:194] Writing out "functional-723696" config to set dashboard=true...
W0821 11:13:30.608575 2766711 out.go:239] * Verifying dashboard health ...
* Verifying dashboard health ...
I0821 11:13:30.609359 2766711 kapi.go:59] client config for functional-723696: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/functional-723696/client.crt", KeyFile:"/home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/functional-723696/client.key", CAFile:"/home/jenkins/minikube-integration/17102-2734539/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1721b00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0821 11:13:30.631018 2766711 service.go:214] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  f7ebc3e0-30cf-4e22-bd65-3c35c6951a34 886 0 2023-08-21 11:13:30 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2023-08-21 11:13:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.102.43.50,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.102.43.50],IPFamilies:[IPv4],AllocateLoadBalancerN
odePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W0821 11:13:30.631201 2766711 out.go:239] * Launching proxy ...
* Launching proxy ...
I0821 11:13:30.631278 2766711 dashboard.go:152] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-723696 proxy --port 36195]
I0821 11:13:30.631520 2766711 dashboard.go:157] Waiting for kubectl to output host:port ...
I0821 11:13:30.709925 2766711 out.go:177] 
W0821 11:13:30.711815 2766711 out.go:239] X Exiting due to HOST_KUBECTL_PROXY: kubectl proxy: readByteWithTimeout: EOF
X Exiting due to HOST_KUBECTL_PROXY: kubectl proxy: readByteWithTimeout: EOF
W0821 11:13:30.711829 2766711 out.go:239] * 
* 
W0821 11:13:30.722243 2766711 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_dashboard_2f9e80c8c4dc47927ad6915561a20c5705c3b3b4_0.log               │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_dashboard_2f9e80c8c4dc47927ad6915561a20c5705c3b3b4_0.log               │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0821 11:13:30.724056 2766711 out.go:177] 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-723696
helpers_test.go:235: (dbg) docker inspect functional-723696:

-- stdout --
	[
	    {
	        "Id": "916385f92e57258ec5895ffd12b5f3e5ba86ba06905a77c6d5d2a5e3f925537b",
	        "Created": "2023-08-21T11:09:57.868233775Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2755888,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-08-21T11:09:58.191723804Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:f52519afe5f6d6f3ce84cbd7f651b1292638d32ca98ee43d88f2d69e113e44de",
	        "ResolvConfPath": "/var/lib/docker/containers/916385f92e57258ec5895ffd12b5f3e5ba86ba06905a77c6d5d2a5e3f925537b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/916385f92e57258ec5895ffd12b5f3e5ba86ba06905a77c6d5d2a5e3f925537b/hostname",
	        "HostsPath": "/var/lib/docker/containers/916385f92e57258ec5895ffd12b5f3e5ba86ba06905a77c6d5d2a5e3f925537b/hosts",
	        "LogPath": "/var/lib/docker/containers/916385f92e57258ec5895ffd12b5f3e5ba86ba06905a77c6d5d2a5e3f925537b/916385f92e57258ec5895ffd12b5f3e5ba86ba06905a77c6d5d2a5e3f925537b-json.log",
	        "Name": "/functional-723696",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-723696:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-723696",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/ed54b50a6fc4262804939bc8d08eb161fca094b2343ac12b0b58aa8072ad1653-init/diff:/var/lib/docker/overlay2/26861af3348249541ea382b8036362f60ea7ec122121fce2bcb8576e1879b2cb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ed54b50a6fc4262804939bc8d08eb161fca094b2343ac12b0b58aa8072ad1653/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ed54b50a6fc4262804939bc8d08eb161fca094b2343ac12b0b58aa8072ad1653/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ed54b50a6fc4262804939bc8d08eb161fca094b2343ac12b0b58aa8072ad1653/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-723696",
	                "Source": "/var/lib/docker/volumes/functional-723696/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-723696",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-723696",
	                "name.minikube.sigs.k8s.io": "functional-723696",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a68b1d7d5a63c8adf38c64fe367c75ec0f8e7bcc2e2b28521d7b8b552907d663",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36198"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36197"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36194"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36196"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36195"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/a68b1d7d5a63",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-723696": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "916385f92e57",
	                        "functional-723696"
	                    ],
	                    "NetworkID": "c2e88ff05f827cf8843fc54b10de906572630fbc0256ff077099e97713fbf59e",
	                    "EndpointID": "c38b9730ed7de69a48815ae9c819a163cc3fc56bced5f15ec9c1da1a0a8d1d4a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-723696 -n functional-723696
helpers_test.go:244: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p functional-723696 logs -n 25: (2.740893473s)
helpers_test.go:252: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |-----------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|  Command  |                                   Args                                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|-----------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| mount     | -p functional-723696                                                     | functional-723696 | jenkins | v1.31.2 | 21 Aug 23 11:13 UTC |                     |
	|           | /tmp/TestFunctionalparallelMountCmdany-port622117732/001:/mount-9p       |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| ssh       | functional-723696 ssh findmnt                                            | functional-723696 | jenkins | v1.31.2 | 21 Aug 23 11:13 UTC |                     |
	|           | -T /mount-9p | grep 9p                                                   |                   |         |         |                     |                     |
	| ssh       | functional-723696 ssh findmnt                                            | functional-723696 | jenkins | v1.31.2 | 21 Aug 23 11:13 UTC | 21 Aug 23 11:13 UTC |
	|           | -T /mount-9p | grep 9p                                                   |                   |         |         |                     |                     |
	| ssh       | functional-723696 ssh -- ls                                              | functional-723696 | jenkins | v1.31.2 | 21 Aug 23 11:13 UTC | 21 Aug 23 11:13 UTC |
	|           | -la /mount-9p                                                            |                   |         |         |                     |                     |
	| ssh       | functional-723696 ssh cat                                                | functional-723696 | jenkins | v1.31.2 | 21 Aug 23 11:13 UTC | 21 Aug 23 11:13 UTC |
	|           | /mount-9p/test-1692616395118150007                                       |                   |         |         |                     |                     |
	| ssh       | functional-723696 ssh stat                                               | functional-723696 | jenkins | v1.31.2 | 21 Aug 23 11:13 UTC | 21 Aug 23 11:13 UTC |
	|           | /mount-9p/created-by-test                                                |                   |         |         |                     |                     |
	| ssh       | functional-723696 ssh stat                                               | functional-723696 | jenkins | v1.31.2 | 21 Aug 23 11:13 UTC | 21 Aug 23 11:13 UTC |
	|           | /mount-9p/created-by-pod                                                 |                   |         |         |                     |                     |
	| ssh       | functional-723696 ssh sudo                                               | functional-723696 | jenkins | v1.31.2 | 21 Aug 23 11:13 UTC | 21 Aug 23 11:13 UTC |
	|           | umount -f /mount-9p                                                      |                   |         |         |                     |                     |
	| ssh       | functional-723696 ssh findmnt                                            | functional-723696 | jenkins | v1.31.2 | 21 Aug 23 11:13 UTC |                     |
	|           | -T /mount-9p | grep 9p                                                   |                   |         |         |                     |                     |
	| mount     | -p functional-723696                                                     | functional-723696 | jenkins | v1.31.2 | 21 Aug 23 11:13 UTC |                     |
	|           | /tmp/TestFunctionalparallelMountCmdspecific-port1162817450/001:/mount-9p |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1 --port 46464                                      |                   |         |         |                     |                     |
	| ssh       | functional-723696 ssh findmnt                                            | functional-723696 | jenkins | v1.31.2 | 21 Aug 23 11:13 UTC | 21 Aug 23 11:13 UTC |
	|           | -T /mount-9p | grep 9p                                                   |                   |         |         |                     |                     |
	| ssh       | functional-723696 ssh -- ls                                              | functional-723696 | jenkins | v1.31.2 | 21 Aug 23 11:13 UTC | 21 Aug 23 11:13 UTC |
	|           | -la /mount-9p                                                            |                   |         |         |                     |                     |
	| ssh       | functional-723696 ssh sudo                                               | functional-723696 | jenkins | v1.31.2 | 21 Aug 23 11:13 UTC |                     |
	|           | umount -f /mount-9p                                                      |                   |         |         |                     |                     |
	| mount     | -p functional-723696                                                     | functional-723696 | jenkins | v1.31.2 | 21 Aug 23 11:13 UTC |                     |
	|           | /tmp/TestFunctionalparallelMountCmdVerifyCleanup1734868876/001:/mount2   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| mount     | -p functional-723696                                                     | functional-723696 | jenkins | v1.31.2 | 21 Aug 23 11:13 UTC |                     |
	|           | /tmp/TestFunctionalparallelMountCmdVerifyCleanup1734868876/001:/mount1   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| mount     | -p functional-723696                                                     | functional-723696 | jenkins | v1.31.2 | 21 Aug 23 11:13 UTC |                     |
	|           | /tmp/TestFunctionalparallelMountCmdVerifyCleanup1734868876/001:/mount3   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| ssh       | functional-723696 ssh findmnt                                            | functional-723696 | jenkins | v1.31.2 | 21 Aug 23 11:13 UTC |                     |
	|           | -T /mount1                                                               |                   |         |         |                     |                     |
	| ssh       | functional-723696 ssh findmnt                                            | functional-723696 | jenkins | v1.31.2 | 21 Aug 23 11:13 UTC | 21 Aug 23 11:13 UTC |
	|           | -T /mount1                                                               |                   |         |         |                     |                     |
	| ssh       | functional-723696 ssh findmnt                                            | functional-723696 | jenkins | v1.31.2 | 21 Aug 23 11:13 UTC | 21 Aug 23 11:13 UTC |
	|           | -T /mount2                                                               |                   |         |         |                     |                     |
	| ssh       | functional-723696 ssh findmnt                                            | functional-723696 | jenkins | v1.31.2 | 21 Aug 23 11:13 UTC | 21 Aug 23 11:13 UTC |
	|           | -T /mount3                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-723696                                                     | functional-723696 | jenkins | v1.31.2 | 21 Aug 23 11:13 UTC |                     |
	|           | --kill=true                                                              |                   |         |         |                     |                     |
	| start     | -p functional-723696                                                     | functional-723696 | jenkins | v1.31.2 | 21 Aug 23 11:13 UTC |                     |
	|           | --dry-run --memory                                                       |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                  |                   |         |         |                     |                     |
	|           | --driver=docker                                                          |                   |         |         |                     |                     |
	|           | --container-runtime=crio                                                 |                   |         |         |                     |                     |
	| start     | -p functional-723696                                                     | functional-723696 | jenkins | v1.31.2 | 21 Aug 23 11:13 UTC |                     |
	|           | --dry-run --alsologtostderr                                              |                   |         |         |                     |                     |
	|           | -v=1 --driver=docker                                                     |                   |         |         |                     |                     |
	|           | --container-runtime=crio                                                 |                   |         |         |                     |                     |
	| start     | -p functional-723696                                                     | functional-723696 | jenkins | v1.31.2 | 21 Aug 23 11:13 UTC |                     |
	|           | --dry-run --memory                                                       |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                  |                   |         |         |                     |                     |
	|           | --driver=docker                                                          |                   |         |         |                     |                     |
	|           | --container-runtime=crio                                                 |                   |         |         |                     |                     |
	| dashboard | --url --port 36195                                                       | functional-723696 | jenkins | v1.31.2 | 21 Aug 23 11:13 UTC |                     |
	|           | -p functional-723696                                                     |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	|-----------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/21 11:13:28
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.20.7 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0821 11:13:28.790281 2766669 out.go:296] Setting OutFile to fd 1 ...
	I0821 11:13:28.790462 2766669 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 11:13:28.790491 2766669 out.go:309] Setting ErrFile to fd 2...
	I0821 11:13:28.790515 2766669 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 11:13:28.792143 2766669 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17102-2734539/.minikube/bin
	I0821 11:13:28.792572 2766669 out.go:303] Setting JSON to false
	I0821 11:13:28.793687 2766669 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":71753,"bootTime":1692544656,"procs":374,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0821 11:13:28.793800 2766669 start.go:138] virtualization:  
	I0821 11:13:28.796207 2766669 out.go:177] * [functional-723696] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0821 11:13:28.798659 2766669 out.go:177]   - MINIKUBE_LOCATION=17102
	I0821 11:13:28.800441 2766669 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0821 11:13:28.798873 2766669 notify.go:220] Checking for updates...
	I0821 11:13:28.802517 2766669 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17102-2734539/kubeconfig
	I0821 11:13:28.804468 2766669 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17102-2734539/.minikube
	I0821 11:13:28.806433 2766669 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0821 11:13:28.808496 2766669 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0821 11:13:28.811763 2766669 config.go:182] Loaded profile config "functional-723696": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0821 11:13:28.812533 2766669 driver.go:373] Setting default libvirt URI to qemu:///system
	I0821 11:13:28.836065 2766669 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0821 11:13:28.836176 2766669 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0821 11:13:28.921851 2766669 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:46 SystemTime:2023-08-21 11:13:28.912493943 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0821 11:13:28.921964 2766669 docker.go:294] overlay module found
	I0821 11:13:28.924320 2766669 out.go:177] * Using the docker driver based on the existing profile
	I0821 11:13:28.926337 2766669 start.go:298] selected driver: docker
	I0821 11:13:28.926357 2766669 start.go:902] validating driver "docker" against &{Name:functional-723696 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:functional-723696 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 11:13:28.926475 2766669 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0821 11:13:28.929065 2766669 out.go:177] 
	W0821 11:13:28.931037 2766669 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0821 11:13:28.933353 2766669 out.go:177] 
	
	* 
	* ==> CRI-O <==
	* Aug 21 11:13:26 functional-723696 crio[4193]: time="2023-08-21 11:13:26.240380600Z" level=warning msg="Allowed annotations are specified for workload []"
	Aug 21 11:13:26 functional-723696 crio[4193]: time="2023-08-21 11:13:26.336796578Z" level=info msg="Created container 6d715e281f45a1040308305276e05cfb5db289b296eb4b59c0de21e5e96c9bd9: default/sp-pod/myfrontend" id=e943a2d3-8d60-4197-a0f2-896ecbad35ac name=/runtime.v1.RuntimeService/CreateContainer
	Aug 21 11:13:26 functional-723696 crio[4193]: time="2023-08-21 11:13:26.339725045Z" level=info msg="Starting container: 6d715e281f45a1040308305276e05cfb5db289b296eb4b59c0de21e5e96c9bd9" id=ab2342e5-5afc-454c-83ce-4309afd086fe name=/runtime.v1.RuntimeService/StartContainer
	Aug 21 11:13:26 functional-723696 crio[4193]: time="2023-08-21 11:13:26.355958995Z" level=info msg="Started container" PID=6744 containerID=6d715e281f45a1040308305276e05cfb5db289b296eb4b59c0de21e5e96c9bd9 description=default/sp-pod/myfrontend id=ab2342e5-5afc-454c-83ce-4309afd086fe name=/runtime.v1.RuntimeService/StartContainer sandboxID=ea884023da47ad421ebab590182e216812df5fe8f753640828da7a292d36d034
	Aug 21 11:13:31 functional-723696 crio[4193]: time="2023-08-21 11:13:31.662715049Z" level=info msg="Running pod sandbox: kubernetes-dashboard/dashboard-metrics-scraper-5dd9cbfd69-6qh94/POD" id=385f88e6-e12e-459a-bd1f-9dcc7cb66b96 name=/runtime.v1.RuntimeService/RunPodSandbox
	Aug 21 11:13:31 functional-723696 crio[4193]: time="2023-08-21 11:13:31.662777202Z" level=warning msg="Allowed annotations are specified for workload []"
	Aug 21 11:13:31 functional-723696 crio[4193]: time="2023-08-21 11:13:31.687515500Z" level=info msg="Got pod network &{Name:dashboard-metrics-scraper-5dd9cbfd69-6qh94 Namespace:kubernetes-dashboard ID:2a784abf0d5607ea997e53ff736512905a2a9b6130b0d1fb2b9fefd659c1c344 UID:621d1676-561a-4190-935a-5c2b53499204 NetNS:/var/run/netns/90f61304-2e21-4eab-a9ed-21801701194b Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Aug 21 11:13:31 functional-723696 crio[4193]: time="2023-08-21 11:13:31.687559757Z" level=info msg="Adding pod kubernetes-dashboard_dashboard-metrics-scraper-5dd9cbfd69-6qh94 to CNI network \"kindnet\" (type=ptp)"
	Aug 21 11:13:31 functional-723696 crio[4193]: time="2023-08-21 11:13:31.705095007Z" level=info msg="Running pod sandbox: kubernetes-dashboard/kubernetes-dashboard-5c5cfc8747-rmknc/POD" id=34676867-70d2-4670-affc-29a31f88d012 name=/runtime.v1.RuntimeService/RunPodSandbox
	Aug 21 11:13:31 functional-723696 crio[4193]: time="2023-08-21 11:13:31.705154763Z" level=warning msg="Allowed annotations are specified for workload []"
	Aug 21 11:13:31 functional-723696 crio[4193]: time="2023-08-21 11:13:31.706467106Z" level=info msg="Got pod network &{Name:dashboard-metrics-scraper-5dd9cbfd69-6qh94 Namespace:kubernetes-dashboard ID:2a784abf0d5607ea997e53ff736512905a2a9b6130b0d1fb2b9fefd659c1c344 UID:621d1676-561a-4190-935a-5c2b53499204 NetNS:/var/run/netns/90f61304-2e21-4eab-a9ed-21801701194b Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Aug 21 11:13:31 functional-723696 crio[4193]: time="2023-08-21 11:13:31.706608289Z" level=info msg="Checking pod kubernetes-dashboard_dashboard-metrics-scraper-5dd9cbfd69-6qh94 for CNI network kindnet (type=ptp)"
	Aug 21 11:13:31 functional-723696 crio[4193]: time="2023-08-21 11:13:31.731382303Z" level=info msg="Got pod network &{Name:kubernetes-dashboard-5c5cfc8747-rmknc Namespace:kubernetes-dashboard ID:2e3f5d53e034384a0b0e2eb5ea0ab74f9c3e436e6a76865789fc91aa13042933 UID:e3f52a0c-5963-40b9-b7d9-0f51f88883f5 NetNS:/var/run/netns/d3a8fca2-f27b-4ba1-bbaf-3864e58bd486 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Aug 21 11:13:31 functional-723696 crio[4193]: time="2023-08-21 11:13:31.731860964Z" level=info msg="Adding pod kubernetes-dashboard_kubernetes-dashboard-5c5cfc8747-rmknc to CNI network \"kindnet\" (type=ptp)"
	Aug 21 11:13:31 functional-723696 crio[4193]: time="2023-08-21 11:13:31.737834327Z" level=info msg="Ran pod sandbox 2a784abf0d5607ea997e53ff736512905a2a9b6130b0d1fb2b9fefd659c1c344 with infra container: kubernetes-dashboard/dashboard-metrics-scraper-5dd9cbfd69-6qh94/POD" id=385f88e6-e12e-459a-bd1f-9dcc7cb66b96 name=/runtime.v1.RuntimeService/RunPodSandbox
	Aug 21 11:13:31 functional-723696 crio[4193]: time="2023-08-21 11:13:31.755653689Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=e650b66a-fff1-4d23-b32e-17e4edc02d16 name=/runtime.v1.ImageService/ImageStatus
	Aug 21 11:13:31 functional-723696 crio[4193]: time="2023-08-21 11:13:31.755930147Z" level=info msg="Image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c not found" id=e650b66a-fff1-4d23-b32e-17e4edc02d16 name=/runtime.v1.ImageService/ImageStatus
	Aug 21 11:13:31 functional-723696 crio[4193]: time="2023-08-21 11:13:31.756795427Z" level=info msg="Pulling image: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=e3342270-d838-4120-94e9-1c9b77d82e96 name=/runtime.v1.ImageService/PullImage
	Aug 21 11:13:31 functional-723696 crio[4193]: time="2023-08-21 11:13:31.759936369Z" level=info msg="Trying to access \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Aug 21 11:13:31 functional-723696 crio[4193]: time="2023-08-21 11:13:31.778939766Z" level=info msg="Got pod network &{Name:kubernetes-dashboard-5c5cfc8747-rmknc Namespace:kubernetes-dashboard ID:2e3f5d53e034384a0b0e2eb5ea0ab74f9c3e436e6a76865789fc91aa13042933 UID:e3f52a0c-5963-40b9-b7d9-0f51f88883f5 NetNS:/var/run/netns/d3a8fca2-f27b-4ba1-bbaf-3864e58bd486 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Aug 21 11:13:31 functional-723696 crio[4193]: time="2023-08-21 11:13:31.779094232Z" level=info msg="Checking pod kubernetes-dashboard_kubernetes-dashboard-5c5cfc8747-rmknc for CNI network kindnet (type=ptp)"
	Aug 21 11:13:31 functional-723696 crio[4193]: time="2023-08-21 11:13:31.795990159Z" level=info msg="Ran pod sandbox 2e3f5d53e034384a0b0e2eb5ea0ab74f9c3e436e6a76865789fc91aa13042933 with infra container: kubernetes-dashboard/kubernetes-dashboard-5c5cfc8747-rmknc/POD" id=34676867-70d2-4670-affc-29a31f88d012 name=/runtime.v1.RuntimeService/RunPodSandbox
	Aug 21 11:13:31 functional-723696 crio[4193]: time="2023-08-21 11:13:31.799324591Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=695c59ef-d28b-40b0-9ac2-a6e1d34cc1e1 name=/runtime.v1.ImageService/ImageStatus
	Aug 21 11:13:31 functional-723696 crio[4193]: time="2023-08-21 11:13:31.799578494Z" level=info msg="Image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 not found" id=695c59ef-d28b-40b0-9ac2-a6e1d34cc1e1 name=/runtime.v1.ImageService/ImageStatus
	Aug 21 11:13:32 functional-723696 crio[4193]: time="2023-08-21 11:13:32.020866524Z" level=info msg="Trying to access \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                    CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	6d715e281f45a       docker.io/library/nginx@sha256:104c7c5c54f2685f0f46f3be607ce60da7085da3eaa5ad22d3d9f01594295e9c          5 seconds ago        Running             myfrontend                0                   ea884023da47a       sp-pod
	008cead82fc62       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e      12 seconds ago       Exited              mount-munger              0                   247f1c257ed2d       busybox-mount
	91d7f75464324       72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb                                         26 seconds ago       Running             echoserver-arm            0                   35b76fcb8d529       hello-node-connect-58d66798bb-hhhp9
	f93097c0eb1d6       docker.io/library/nginx@sha256:cac882be2b7305e0c8d3e3cd0575a2fd58f5fde6dd5d6299605aa0f3e67ca385          33 seconds ago       Running             nginx                     0                   a3b0e59462de2       nginx-svc
	815bc74dd32ab       registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5   48 seconds ago       Running             echoserver-arm            0                   556602d99d08b       hello-node-7b684b55f9-kss6r
	d7a4d7604c17f       532e5a30e948f1c084333316b13e68fbeff8df667f3830b082005127a6d86317                                         About a minute ago   Running             kube-proxy                2                   954b83d029496       kube-proxy-8lgdc
	b85b6343cc2cc       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                         About a minute ago   Running             coredns                   2                   97e806ecee40b       coredns-5d78c9869d-grwvb
	37c887b6c234c       b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79                                         About a minute ago   Running             kindnet-cni               3                   fe30a46e2f359       kindnet-nltrk
	8a08a4a8e49ef       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                         About a minute ago   Running             storage-provisioner       3                   9a4b87ca2fa16       storage-provisioner
	7c0e2518bf333       64aece92d6bde5b472d8185fcd2d5ab1add8814923a26561821f7cab5e819388                                         About a minute ago   Running             kube-apiserver            0                   95bc2e55101b2       kube-apiserver-functional-723696
	515329a7b37f6       6eb63895cb67fce76da3ed6eaaa865ff55e7c761c9e6a691a83855ff0987a085                                         About a minute ago   Running             kube-scheduler            2                   870a4727f65c9       kube-scheduler-functional-723696
	97e071464b0d7       24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737                                         About a minute ago   Running             etcd                      2                   96141fdaaf156       etcd-functional-723696
	2400a35fbbf98       389f6f052cf83156f82a2bbbf6ea2c24292d246b58900d91f6a1707eacf510b2                                         About a minute ago   Running             kube-controller-manager   2                   6e2bdd8a5b182       kube-controller-manager-functional-723696
	35060d167433c       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                         About a minute ago   Exited              storage-provisioner       2                   9a4b87ca2fa16       storage-provisioner
	4bd5ee59ab9b0       532e5a30e948f1c084333316b13e68fbeff8df667f3830b082005127a6d86317                                         About a minute ago   Exited              kube-proxy                1                   954b83d029496       kube-proxy-8lgdc
	809c487dfd967       b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79                                         About a minute ago   Exited              kindnet-cni               2                   fe30a46e2f359       kindnet-nltrk
	480e502cfa8ef       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                         2 minutes ago        Exited              coredns                   1                   97e806ecee40b       coredns-5d78c9869d-grwvb
	fa94fb151b0e6       6eb63895cb67fce76da3ed6eaaa865ff55e7c761c9e6a691a83855ff0987a085                                         2 minutes ago        Exited              kube-scheduler            1                   870a4727f65c9       kube-scheduler-functional-723696
	9b58ad8da8467       24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737                                         2 minutes ago        Exited              etcd                      1                   96141fdaaf156       etcd-functional-723696
	0e3d5518cbaba       389f6f052cf83156f82a2bbbf6ea2c24292d246b58900d91f6a1707eacf510b2                                         2 minutes ago        Exited              kube-controller-manager   1                   6e2bdd8a5b182       kube-controller-manager-functional-723696
	
	* 
	* ==> coredns [480e502cfa8efbf7735638d64e1b055056ae1713bcb6f539980decd3c83423d3] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:48009 - 6657 "HINFO IN 1113237143167090231.880225586470415924. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.027941975s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> coredns [b85b6343cc2cc26841a94bb667cf9ee3a55eda9f9c46bb9769606f155a7ff3e4] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:52150 - 35120 "HINFO IN 8886377898870578675.1616929473606272890. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015387222s
	
	* 
	* ==> describe nodes <==
	* Name:               functional-723696
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-723696
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43
	                    minikube.k8s.io/name=functional-723696
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_08_21T11_10_24_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 21 Aug 2023 11:10:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-723696
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 21 Aug 2023 11:13:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 21 Aug 2023 11:13:10 +0000   Mon, 21 Aug 2023 11:10:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 21 Aug 2023 11:13:10 +0000   Mon, 21 Aug 2023 11:10:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 21 Aug 2023 11:13:10 +0000   Mon, 21 Aug 2023 11:10:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 21 Aug 2023 11:13:10 +0000   Mon, 21 Aug 2023 11:11:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-723696
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022564Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022564Ki
	  pods:               110
	System Info:
	  Machine ID:                 24c60c95c31546b1bcacaca5dbada953
	  System UUID:                e5d65639-a3b5-4346-952a-378d515c2711
	  Boot ID:                    02e315f4-a354-4b0b-b564-f929fd2e643c
	  Kernel Version:             5.15.0-1041-aws
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.27.4
	  Kube-Proxy Version:         v1.27.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-7b684b55f9-kss6r                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  default                     hello-node-connect-58d66798bb-hhhp9           0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         37s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 coredns-5d78c9869d-grwvb                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m57s
	  kube-system                 etcd-functional-723696                        100m (5%)    0 (0%)      100Mi (1%)       0 (0%)         3m11s
	  kube-system                 kindnet-nltrk                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m57s
	  kube-system                 kube-apiserver-functional-723696              250m (12%)    0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 kube-controller-manager-functional-723696     200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m9s
	  kube-system                 kube-proxy-8lgdc                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m57s
	  kube-system                 kube-scheduler-functional-723696              100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m9s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m55s
	  kubernetes-dashboard        dashboard-metrics-scraper-5dd9cbfd69-6qh94    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  kubernetes-dashboard        kubernetes-dashboard-5c5cfc8747-rmknc         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m55s                  kube-proxy       
	  Normal  Starting                 82s                    kube-proxy       
	  Normal  Starting                 119s                   kube-proxy       
	  Normal  Starting                 3m18s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m18s (x8 over 3m18s)  kubelet          Node functional-723696 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m18s (x8 over 3m18s)  kubelet          Node functional-723696 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m18s (x8 over 3m18s)  kubelet          Node functional-723696 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  3m9s                   kubelet          Node functional-723696 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m9s                   kubelet          Node functional-723696 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m9s                   kubelet          Node functional-723696 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m9s                   kubelet          Starting kubelet.
	  Normal  RegisteredNode           2m57s                  node-controller  Node functional-723696 event: Registered Node functional-723696 in Controller
	  Normal  NodeReady                2m25s                  kubelet          Node functional-723696 status is now: NodeReady
	  Normal  RegisteredNode           108s                   node-controller  Node functional-723696 event: Registered Node functional-723696 in Controller
	  Normal  Starting                 90s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  89s (x8 over 89s)      kubelet          Node functional-723696 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    89s (x8 over 89s)      kubelet          Node functional-723696 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     89s (x8 over 89s)      kubelet          Node functional-723696 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           71s                    node-controller  Node functional-723696 event: Registered Node functional-723696 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.001023] FS-Cache: O-key=[8] '9a4b5c0100000000'
	[  +0.000699] FS-Cache: N-cookie c=000000d2 [p=000000c9 fl=2 nc=0 na=1]
	[  +0.000916] FS-Cache: N-cookie d=00000000128a3fc5{9p.inode} n=00000000cd9d496d
	[  +0.001054] FS-Cache: N-key=[8] '9a4b5c0100000000'
	[  +0.002483] FS-Cache: Duplicate cookie detected
	[  +0.000733] FS-Cache: O-cookie c=000000cc [p=000000c9 fl=226 nc=0 na=1]
	[  +0.000977] FS-Cache: O-cookie d=00000000128a3fc5{9p.inode} n=00000000aedceb4a
	[  +0.001055] FS-Cache: O-key=[8] '9a4b5c0100000000'
	[  +0.000715] FS-Cache: N-cookie c=000000d3 [p=000000c9 fl=2 nc=0 na=1]
	[  +0.000930] FS-Cache: N-cookie d=00000000128a3fc5{9p.inode} n=000000000adb5282
	[  +0.001063] FS-Cache: N-key=[8] '9a4b5c0100000000'
	[  +3.434482] FS-Cache: Duplicate cookie detected
	[  +0.000767] FS-Cache: O-cookie c=000000ca [p=000000c9 fl=226 nc=0 na=1]
	[  +0.000971] FS-Cache: O-cookie d=00000000128a3fc5{9p.inode} n=000000007708d8a6
	[  +0.001033] FS-Cache: O-key=[8] '994b5c0100000000'
	[  +0.000696] FS-Cache: N-cookie c=000000d5 [p=000000c9 fl=2 nc=0 na=1]
	[  +0.000936] FS-Cache: N-cookie d=00000000128a3fc5{9p.inode} n=000000006b8e342c
	[  +0.001064] FS-Cache: N-key=[8] '994b5c0100000000'
	[  +0.475929] FS-Cache: Duplicate cookie detected
	[  +0.000712] FS-Cache: O-cookie c=000000cf [p=000000c9 fl=226 nc=0 na=1]
	[  +0.000952] FS-Cache: O-cookie d=00000000128a3fc5{9p.inode} n=00000000939f1609
	[  +0.001035] FS-Cache: O-key=[8] '9f4b5c0100000000'
	[  +0.000728] FS-Cache: N-cookie c=000000d6 [p=000000c9 fl=2 nc=0 na=1]
	[  +0.000926] FS-Cache: N-cookie d=00000000128a3fc5{9p.inode} n=00000000fa5b2717
	[  +0.001032] FS-Cache: N-key=[8] '9f4b5c0100000000'
	
	* 
	* ==> etcd [97e071464b0d7c7ef56f80aa5b9a3fcf44991857694fbabd0fb09dae9237dabe] <==
	* {"level":"info","ts":"2023-08-21T11:12:04.025Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-08-21T11:12:04.025Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2023-08-21T11:12:04.026Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2023-08-21T11:12:04.030Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2023-08-21T11:12:04.030Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-21T11:12:04.030Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-21T11:12:04.042Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-08-21T11:12:04.044Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-08-21T11:12:04.045Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-08-21T11:12:04.046Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-08-21T11:12:04.046Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-08-21T11:12:05.073Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 3"}
	{"level":"info","ts":"2023-08-21T11:12:05.074Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 3"}
	{"level":"info","ts":"2023-08-21T11:12:05.074Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2023-08-21T11:12:05.074Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 4"}
	{"level":"info","ts":"2023-08-21T11:12:05.074Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 4"}
	{"level":"info","ts":"2023-08-21T11:12:05.074Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 4"}
	{"level":"info","ts":"2023-08-21T11:12:05.074Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 4"}
	{"level":"info","ts":"2023-08-21T11:12:05.080Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-723696 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2023-08-21T11:12:05.081Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-21T11:12:05.082Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-08-21T11:12:05.089Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-21T11:12:05.091Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2023-08-21T11:12:05.090Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-08-21T11:12:05.098Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> etcd [9b58ad8da8467988fcbdf34f7d70876ea06e5a850034457b216321bebdaceab9] <==
	* {"level":"info","ts":"2023-08-21T11:11:25.568Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-08-21T11:11:25.570Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-08-21T11:11:25.570Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-08-21T11:11:25.569Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-08-21T11:11:25.570Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-08-21T11:11:27.439Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 2"}
	{"level":"info","ts":"2023-08-21T11:11:27.439Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2023-08-21T11:11:27.440Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-08-21T11:11:27.440Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2023-08-21T11:11:27.440Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2023-08-21T11:11:27.440Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2023-08-21T11:11:27.440Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2023-08-21T11:11:27.441Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-723696 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2023-08-21T11:11:27.442Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-21T11:11:27.443Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-08-21T11:11:27.443Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-21T11:11:27.444Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2023-08-21T11:11:27.452Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-08-21T11:11:27.452Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-08-21T11:11:51.194Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-08-21T11:11:51.194Z","caller":"embed/etcd.go:373","msg":"closing etcd server","name":"functional-723696","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"info","ts":"2023-08-21T11:11:51.354Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2023-08-21T11:11:51.355Z","caller":"embed/etcd.go:568","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-08-21T11:11:51.357Z","caller":"embed/etcd.go:573","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-08-21T11:11:51.357Z","caller":"embed/etcd.go:375","msg":"closed etcd server","name":"functional-723696","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	* 
	* ==> kernel <==
	*  11:13:32 up 19:55,  0 users,  load average: 2.35, 1.95, 2.00
	Linux functional-723696 5.15.0-1041-aws #46~20.04.1-Ubuntu SMP Wed Jul 19 15:39:29 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [37c887b6c234cf107b30141fb087acaf1506007233580932a5898160e0d6a885] <==
	* I0821 11:12:09.437682       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0821 11:12:09.437943       1 main.go:107] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0821 11:12:09.438117       1 main.go:116] setting mtu 1500 for CNI 
	I0821 11:12:09.438159       1 main.go:146] kindnetd IP family: "ipv4"
	I0821 11:12:09.438195       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0821 11:12:09.775269       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0821 11:12:09.775309       1 main.go:227] handling current node
	I0821 11:12:19.792771       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0821 11:12:19.792796       1 main.go:227] handling current node
	I0821 11:12:29.803869       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0821 11:12:29.803914       1 main.go:227] handling current node
	I0821 11:12:39.818436       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0821 11:12:39.818463       1 main.go:227] handling current node
	I0821 11:12:49.829368       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0821 11:12:49.829396       1 main.go:227] handling current node
	I0821 11:12:59.833109       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0821 11:12:59.833137       1 main.go:227] handling current node
	I0821 11:13:09.845249       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0821 11:13:09.845277       1 main.go:227] handling current node
	I0821 11:13:19.857579       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0821 11:13:19.857606       1 main.go:227] handling current node
	I0821 11:13:29.870411       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0821 11:13:29.870514       1 main.go:227] handling current node
	
	* 
	* ==> kindnet [809c487dfd9677ab94589a1b2ba4008a73cf7f804fdac46be7a1baaf072624fb] <==
	* I0821 11:11:32.577342       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0821 11:11:32.577405       1 main.go:107] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0821 11:11:32.577542       1 main.go:116] setting mtu 1500 for CNI 
	I0821 11:11:32.577553       1 main.go:146] kindnetd IP family: "ipv4"
	I0821 11:11:32.577564       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0821 11:11:32.976144       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0821 11:11:32.976267       1 main.go:227] handling current node
	I0821 11:11:42.992664       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0821 11:11:42.992883       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [7c0e2518bf333c086dce219f0cd2e1f74476006944d5f203b91e5b085f48f709] <==
	* I0821 11:12:08.877404       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0821 11:12:08.877424       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0821 11:12:08.883829       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0821 11:12:08.883929       1 aggregator.go:152] initial CRD sync complete...
	I0821 11:12:08.883984       1 autoregister_controller.go:141] Starting autoregister controller
	I0821 11:12:08.884012       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0821 11:12:08.884043       1 cache.go:39] Caches are synced for autoregister controller
	I0821 11:12:08.899111       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0821 11:12:09.222895       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0821 11:12:09.582975       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0821 11:12:11.161309       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0821 11:12:11.288081       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0821 11:12:11.299329       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0821 11:12:11.363768       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0821 11:12:11.371119       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0821 11:12:26.895247       1 controller.go:624] quota admission added evaluator for: endpoints
	I0821 11:12:31.103299       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs=map[IPv4:10.110.147.56]
	I0821 11:12:31.130706       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0821 11:12:39.677587       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0821 11:12:39.844416       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs=map[IPv4:10.102.58.42]
	I0821 11:12:55.548781       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs=map[IPv4:10.102.212.84]
	I0821 11:13:05.341237       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs=map[IPv4:10.111.191.81]
	I0821 11:13:30.267278       1 controller.go:624] quota admission added evaluator for: namespaces
	I0821 11:13:30.548720       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.102.43.50]
	I0821 11:13:30.594918       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.98.195.147]
	
	* 
	* ==> kube-controller-manager [0e3d5518cbabae70479135b51516e6e93e0eeb4a9f98f731e6c8fd6a7b896600] <==
	* I0821 11:11:44.819638       1 shared_informer.go:318] Caches are synced for attach detach
	I0821 11:11:44.819689       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0821 11:11:44.824554       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0821 11:11:44.826837       1 shared_informer.go:318] Caches are synced for node
	I0821 11:11:44.826922       1 range_allocator.go:174] "Sending events to api server"
	I0821 11:11:44.826973       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0821 11:11:44.826984       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0821 11:11:44.826990       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0821 11:11:44.827055       1 shared_informer.go:318] Caches are synced for endpoint
	I0821 11:11:44.829509       1 shared_informer.go:318] Caches are synced for PVC protection
	I0821 11:11:44.830618       1 shared_informer.go:318] Caches are synced for ephemeral
	I0821 11:11:44.831738       1 shared_informer.go:318] Caches are synced for PV protection
	I0821 11:11:44.835957       1 shared_informer.go:318] Caches are synced for GC
	I0821 11:11:44.842761       1 shared_informer.go:318] Caches are synced for stateful set
	I0821 11:11:44.852905       1 shared_informer.go:318] Caches are synced for expand
	I0821 11:11:44.853942       1 shared_informer.go:318] Caches are synced for daemon sets
	I0821 11:11:44.856696       1 shared_informer.go:318] Caches are synced for HPA
	I0821 11:11:44.891553       1 shared_informer.go:318] Caches are synced for service account
	I0821 11:11:44.959156       1 shared_informer.go:318] Caches are synced for resource quota
	I0821 11:11:44.960209       1 shared_informer.go:318] Caches are synced for disruption
	I0821 11:11:44.992098       1 shared_informer.go:318] Caches are synced for deployment
	I0821 11:11:45.005221       1 shared_informer.go:318] Caches are synced for resource quota
	I0821 11:11:45.337609       1 shared_informer.go:318] Caches are synced for garbage collector
	I0821 11:11:45.337641       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0821 11:11:45.386160       1 shared_informer.go:318] Caches are synced for garbage collector
	
	* 
	* ==> kube-controller-manager [2400a35fbbf98e55d8d04c9a08bd4d58a06a5aa99f63d7804015118699f3732d] <==
	* I0821 11:12:39.720540       1 event.go:307] "Event occurred" object="default/hello-node-7b684b55f9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-7b684b55f9-kss6r"
	I0821 11:13:04.842121       1 event.go:307] "Event occurred" object="default/myclaim" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0821 11:13:04.842534       1 event.go:307] "Event occurred" object="default/myclaim" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0821 11:13:05.175120       1 event.go:307] "Event occurred" object="default/hello-node-connect" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-connect-58d66798bb to 1"
	I0821 11:13:05.186918       1 event.go:307] "Event occurred" object="default/hello-node-connect-58d66798bb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-connect-58d66798bb-hhhp9"
	I0821 11:13:30.350781       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5dd9cbfd69 to 1"
	I0821 11:13:30.369475       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-5c5cfc8747 to 1"
	I0821 11:13:30.372849       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5dd9cbfd69" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-5dd9cbfd69-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0821 11:13:30.382599       1 replica_set.go:544] sync "kubernetes-dashboard/dashboard-metrics-scraper-5dd9cbfd69" failed with pods "dashboard-metrics-scraper-5dd9cbfd69-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0821 11:13:30.389554       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5c5cfc8747" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5c5cfc8747-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0821 11:13:30.404916       1 replica_set.go:544] sync "kubernetes-dashboard/kubernetes-dashboard-5c5cfc8747" failed with pods "kubernetes-dashboard-5c5cfc8747-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0821 11:13:30.408060       1 replica_set.go:544] sync "kubernetes-dashboard/dashboard-metrics-scraper-5dd9cbfd69" failed with pods "dashboard-metrics-scraper-5dd9cbfd69-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0821 11:13:30.408625       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5dd9cbfd69" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-5dd9cbfd69-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0821 11:13:30.417859       1 replica_set.go:544] sync "kubernetes-dashboard/kubernetes-dashboard-5c5cfc8747" failed with pods "kubernetes-dashboard-5c5cfc8747-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0821 11:13:30.418323       1 replica_set.go:544] sync "kubernetes-dashboard/dashboard-metrics-scraper-5dd9cbfd69" failed with pods "dashboard-metrics-scraper-5dd9cbfd69-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0821 11:13:30.418396       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5dd9cbfd69" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-5dd9cbfd69-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0821 11:13:30.418410       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5c5cfc8747" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5c5cfc8747-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0821 11:13:30.426159       1 replica_set.go:544] sync "kubernetes-dashboard/kubernetes-dashboard-5c5cfc8747" failed with pods "kubernetes-dashboard-5c5cfc8747-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0821 11:13:30.426647       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5c5cfc8747" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5c5cfc8747-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0821 11:13:30.429218       1 replica_set.go:544] sync "kubernetes-dashboard/dashboard-metrics-scraper-5dd9cbfd69" failed with pods "dashboard-metrics-scraper-5dd9cbfd69-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0821 11:13:30.429314       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5dd9cbfd69" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-5dd9cbfd69-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0821 11:13:30.432529       1 replica_set.go:544] sync "kubernetes-dashboard/kubernetes-dashboard-5c5cfc8747" failed with pods "kubernetes-dashboard-5c5cfc8747-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0821 11:13:30.433217       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5c5cfc8747" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5c5cfc8747-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0821 11:13:30.448136       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5dd9cbfd69" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5dd9cbfd69-6qh94"
	I0821 11:13:30.483862       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5c5cfc8747" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-5c5cfc8747-rmknc"
	
	* 
	* ==> kube-proxy [4bd5ee59ab9b001ed8ca2687f1f8fd473adb5ef0a50eeb130c8aa4ce826366d6] <==
	* I0821 11:11:33.069123       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0821 11:11:33.069223       1 server_others.go:110] "Detected node IP" address="192.168.49.2"
	I0821 11:11:33.069244       1 server_others.go:554] "Using iptables proxy"
	I0821 11:11:33.096351       1 server_others.go:192] "Using iptables Proxier"
	I0821 11:11:33.096386       1 server_others.go:199] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0821 11:11:33.096395       1 server_others.go:200] "Creating dualStackProxier for iptables"
	I0821 11:11:33.096410       1 server_others.go:484] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, defaulting to no-op detect-local for IPv6"
	I0821 11:11:33.096482       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0821 11:11:33.097092       1 server.go:658] "Version info" version="v1.27.4"
	I0821 11:11:33.097110       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0821 11:11:33.098697       1 config.go:188] "Starting service config controller"
	I0821 11:11:33.098718       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0821 11:11:33.098735       1 config.go:97] "Starting endpoint slice config controller"
	I0821 11:11:33.098739       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0821 11:11:33.099110       1 config.go:315] "Starting node config controller"
	I0821 11:11:33.099127       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0821 11:11:33.199394       1 shared_informer.go:318] Caches are synced for node config
	I0821 11:11:33.199393       1 shared_informer.go:318] Caches are synced for service config
	I0821 11:11:33.199420       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-proxy [d7a4d7604c17f600b144a38c233e7ef3d22e66f4e873191f03b9236a92602b0f] <==
	* I0821 11:12:09.559266       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0821 11:12:09.559368       1 server_others.go:110] "Detected node IP" address="192.168.49.2"
	I0821 11:12:09.559388       1 server_others.go:554] "Using iptables proxy"
	I0821 11:12:09.664969       1 server_others.go:192] "Using iptables Proxier"
	I0821 11:12:09.665005       1 server_others.go:199] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0821 11:12:09.665013       1 server_others.go:200] "Creating dualStackProxier for iptables"
	I0821 11:12:09.665031       1 server_others.go:484] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, defaulting to no-op detect-local for IPv6"
	I0821 11:12:09.665102       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0821 11:12:09.665611       1 server.go:658] "Version info" version="v1.27.4"
	I0821 11:12:09.665629       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0821 11:12:09.668012       1 config.go:188] "Starting service config controller"
	I0821 11:12:09.668088       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0821 11:12:09.668127       1 config.go:97] "Starting endpoint slice config controller"
	I0821 11:12:09.668135       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0821 11:12:09.668686       1 config.go:315] "Starting node config controller"
	I0821 11:12:09.668702       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0821 11:12:09.769500       1 shared_informer.go:318] Caches are synced for node config
	I0821 11:12:09.769603       1 shared_informer.go:318] Caches are synced for service config
	I0821 11:12:09.769617       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [515329a7b37f65d557d6a79056bcf1c502ade26d4c88d4269b53fe1228e4c644] <==
	* I0821 11:12:06.463631       1 serving.go:348] Generated self-signed cert in-memory
	W0821 11:12:08.790419       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0821 11:12:08.791132       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0821 11:12:08.791193       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0821 11:12:08.791228       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0821 11:12:08.835402       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.27.4"
	I0821 11:12:08.835505       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0821 11:12:08.837451       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0821 11:12:08.838214       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0821 11:12:08.838282       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0821 11:12:08.838327       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0821 11:12:08.939642       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [fa94fb151b0e69905233a42bf872b05c601a68f8dfe8f423670ffa874830c31d] <==
	* E0821 11:11:32.728967       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0821 11:11:32.729037       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0821 11:11:32.729074       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0821 11:11:32.729148       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0821 11:11:32.729183       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0821 11:11:32.729277       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0821 11:11:32.729314       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0821 11:11:32.729385       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0821 11:11:32.729419       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0821 11:11:32.729485       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0821 11:11:32.729519       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0821 11:11:32.729589       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0821 11:11:32.729628       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0821 11:11:32.729758       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0821 11:11:32.729803       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0821 11:11:32.729916       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0821 11:11:32.729959       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0821 11:11:32.730032       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0821 11:11:32.730078       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0821 11:11:32.730144       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0821 11:11:32.730181       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0821 11:11:32.730214       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0821 11:11:32.730248       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0821 11:11:35.062560       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0821 11:11:51.200841       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	
	* 
	* ==> kubelet <==
	* Aug 21 11:13:21 functional-723696 kubelet[4462]: I0821 11:13:21.377110    4462 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/cee3c053-5d88-4ac5-bd98-a508f77cf8ec-test-volume\") pod \"cee3c053-5d88-4ac5-bd98-a508f77cf8ec\" (UID: \"cee3c053-5d88-4ac5-bd98-a508f77cf8ec\") "
	Aug 21 11:13:21 functional-723696 kubelet[4462]: I0821 11:13:21.377177    4462 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qczg9\" (UniqueName: \"kubernetes.io/projected/cee3c053-5d88-4ac5-bd98-a508f77cf8ec-kube-api-access-qczg9\") pod \"cee3c053-5d88-4ac5-bd98-a508f77cf8ec\" (UID: \"cee3c053-5d88-4ac5-bd98-a508f77cf8ec\") "
	Aug 21 11:13:21 functional-723696 kubelet[4462]: I0821 11:13:21.377180    4462 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cee3c053-5d88-4ac5-bd98-a508f77cf8ec-test-volume" (OuterVolumeSpecName: "test-volume") pod "cee3c053-5d88-4ac5-bd98-a508f77cf8ec" (UID: "cee3c053-5d88-4ac5-bd98-a508f77cf8ec"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Aug 21 11:13:21 functional-723696 kubelet[4462]: I0821 11:13:21.379618    4462 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cee3c053-5d88-4ac5-bd98-a508f77cf8ec-kube-api-access-qczg9" (OuterVolumeSpecName: "kube-api-access-qczg9") pod "cee3c053-5d88-4ac5-bd98-a508f77cf8ec" (UID: "cee3c053-5d88-4ac5-bd98-a508f77cf8ec"). InnerVolumeSpecName "kube-api-access-qczg9". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 21 11:13:21 functional-723696 kubelet[4462]: I0821 11:13:21.478337    4462 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-qczg9\" (UniqueName: \"kubernetes.io/projected/cee3c053-5d88-4ac5-bd98-a508f77cf8ec-kube-api-access-qczg9\") on node \"functional-723696\" DevicePath \"\""
	Aug 21 11:13:21 functional-723696 kubelet[4462]: I0821 11:13:21.478381    4462 reconciler_common.go:300] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/cee3c053-5d88-4ac5-bd98-a508f77cf8ec-test-volume\") on node \"functional-723696\" DevicePath \"\""
	Aug 21 11:13:22 functional-723696 kubelet[4462]: I0821 11:13:22.403116    4462 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="247f1c257ed2d4e35ba20220bff33c190e8c1645ff8e2fb2b5bdb01e2af0856f"
	Aug 21 11:13:30 functional-723696 kubelet[4462]: I0821 11:13:30.459372    4462 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/sp-pod" podStartSLOduration=4.96457605 podCreationTimestamp="2023-08-21 11:13:05 +0000 UTC" firstStartedPulling="2023-08-21 11:13:05.742292961 +0000 UTC m=+62.949622923" lastFinishedPulling="2023-08-21 11:13:26.237043288 +0000 UTC m=+83.444373250" observedRunningTime="2023-08-21 11:13:26.428026157 +0000 UTC m=+83.635356119" watchObservedRunningTime="2023-08-21 11:13:30.459326377 +0000 UTC m=+87.666656356"
	Aug 21 11:13:30 functional-723696 kubelet[4462]: I0821 11:13:30.459717    4462 topology_manager.go:212] "Topology Admit Handler"
	Aug 21 11:13:30 functional-723696 kubelet[4462]: E0821 11:13:30.459773    4462 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cee3c053-5d88-4ac5-bd98-a508f77cf8ec" containerName="mount-munger"
	Aug 21 11:13:30 functional-723696 kubelet[4462]: I0821 11:13:30.459808    4462 memory_manager.go:346] "RemoveStaleState removing state" podUID="cee3c053-5d88-4ac5-bd98-a508f77cf8ec" containerName="mount-munger"
	Aug 21 11:13:30 functional-723696 kubelet[4462]: W0821 11:13:30.468330    4462 reflector.go:533] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:functional-723696" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'functional-723696' and this object
	Aug 21 11:13:30 functional-723696 kubelet[4462]: E0821 11:13:30.468372    4462 reflector.go:148] object-"kubernetes-dashboard"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:functional-723696" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'functional-723696' and this object
	Aug 21 11:13:30 functional-723696 kubelet[4462]: I0821 11:13:30.503915    4462 topology_manager.go:212] "Topology Admit Handler"
	Aug 21 11:13:30 functional-723696 kubelet[4462]: I0821 11:13:30.542237    4462 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/621d1676-561a-4190-935a-5c2b53499204-tmp-volume\") pod \"dashboard-metrics-scraper-5dd9cbfd69-6qh94\" (UID: \"621d1676-561a-4190-935a-5c2b53499204\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5dd9cbfd69-6qh94"
	Aug 21 11:13:30 functional-723696 kubelet[4462]: I0821 11:13:30.542292    4462 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgj8t\" (UniqueName: \"kubernetes.io/projected/621d1676-561a-4190-935a-5c2b53499204-kube-api-access-kgj8t\") pod \"dashboard-metrics-scraper-5dd9cbfd69-6qh94\" (UID: \"621d1676-561a-4190-935a-5c2b53499204\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5dd9cbfd69-6qh94"
	Aug 21 11:13:30 functional-723696 kubelet[4462]: I0821 11:13:30.542338    4462 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/e3f52a0c-5963-40b9-b7d9-0f51f88883f5-tmp-volume\") pod \"kubernetes-dashboard-5c5cfc8747-rmknc\" (UID: \"e3f52a0c-5963-40b9-b7d9-0f51f88883f5\") " pod="kubernetes-dashboard/kubernetes-dashboard-5c5cfc8747-rmknc"
	Aug 21 11:13:30 functional-723696 kubelet[4462]: I0821 11:13:30.542365    4462 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d569n\" (UniqueName: \"kubernetes.io/projected/e3f52a0c-5963-40b9-b7d9-0f51f88883f5-kube-api-access-d569n\") pod \"kubernetes-dashboard-5c5cfc8747-rmknc\" (UID: \"e3f52a0c-5963-40b9-b7d9-0f51f88883f5\") " pod="kubernetes-dashboard/kubernetes-dashboard-5c5cfc8747-rmknc"
	Aug 21 11:13:33 functional-723696 kubelet[4462]: I0821 11:13:33.276442    4462 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"mypd\" (UniqueName: \"kubernetes.io/host-path/18e70554-21f6-4b11-87d5-da4704efecaa-pvc-75f05ff3-138e-4671-ad10-9c4d4f82e09d\") pod \"18e70554-21f6-4b11-87d5-da4704efecaa\" (UID: \"18e70554-21f6-4b11-87d5-da4704efecaa\") "
	Aug 21 11:13:33 functional-723696 kubelet[4462]: I0821 11:13:33.276510    4462 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-789ft\" (UniqueName: \"kubernetes.io/projected/18e70554-21f6-4b11-87d5-da4704efecaa-kube-api-access-789ft\") pod \"18e70554-21f6-4b11-87d5-da4704efecaa\" (UID: \"18e70554-21f6-4b11-87d5-da4704efecaa\") "
	Aug 21 11:13:33 functional-723696 kubelet[4462]: I0821 11:13:33.276929    4462 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18e70554-21f6-4b11-87d5-da4704efecaa-pvc-75f05ff3-138e-4671-ad10-9c4d4f82e09d" (OuterVolumeSpecName: "mypd") pod "18e70554-21f6-4b11-87d5-da4704efecaa" (UID: "18e70554-21f6-4b11-87d5-da4704efecaa"). InnerVolumeSpecName "pvc-75f05ff3-138e-4671-ad10-9c4d4f82e09d". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Aug 21 11:13:33 functional-723696 kubelet[4462]: I0821 11:13:33.295397    4462 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18e70554-21f6-4b11-87d5-da4704efecaa-kube-api-access-789ft" (OuterVolumeSpecName: "kube-api-access-789ft") pod "18e70554-21f6-4b11-87d5-da4704efecaa" (UID: "18e70554-21f6-4b11-87d5-da4704efecaa"). InnerVolumeSpecName "kube-api-access-789ft". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 21 11:13:33 functional-723696 kubelet[4462]: I0821 11:13:33.376855    4462 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-789ft\" (UniqueName: \"kubernetes.io/projected/18e70554-21f6-4b11-87d5-da4704efecaa-kube-api-access-789ft\") on node \"functional-723696\" DevicePath \"\""
	Aug 21 11:13:33 functional-723696 kubelet[4462]: I0821 11:13:33.376896    4462 reconciler_common.go:300] "Volume detached for volume \"pvc-75f05ff3-138e-4671-ad10-9c4d4f82e09d\" (UniqueName: \"kubernetes.io/host-path/18e70554-21f6-4b11-87d5-da4704efecaa-pvc-75f05ff3-138e-4671-ad10-9c4d4f82e09d\") on node \"functional-723696\" DevicePath \"\""
	Aug 21 11:13:33 functional-723696 kubelet[4462]: I0821 11:13:33.436780    4462 scope.go:115] "RemoveContainer" containerID="6d715e281f45a1040308305276e05cfb5db289b296eb4b59c0de21e5e96c9bd9"
	
	* 
	* ==> storage-provisioner [35060d167433ce1e43ab9bc25eda9376f9eb372c8c0eddae4225de9abf00be31] <==
	* I0821 11:11:34.514371       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0821 11:11:34.629535       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0821 11:11:34.629633       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	* 
	* ==> storage-provisioner [8a08a4a8e49ef7a33c2e9fcd63a0b49f3269c1a1ee90f9b6327c8c70b5d12a1d] <==
	* I0821 11:12:09.464481       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0821 11:12:09.499356       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0821 11:12:09.499443       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0821 11:12:26.898047       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0821 11:12:26.898231       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-723696_944ed861-8a92-42e1-918b-4a8d5b34507d!
	I0821 11:12:26.899942       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e27852c7-49df-4514-8579-8e14ace6b2c1", APIVersion:"v1", ResourceVersion:"641", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-723696_944ed861-8a92-42e1-918b-4a8d5b34507d became leader
	I0821 11:12:27.000401       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-723696_944ed861-8a92-42e1-918b-4a8d5b34507d!
	I0821 11:13:04.844135       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0821 11:13:04.844278       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    1d6f9543-f5ae-40c8-805e-bbd55a884a26 406 0 2023-08-21 11:10:36 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2023-08-21 11:10:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-75f05ff3-138e-4671-ad10-9c4d4f82e09d &PersistentVolumeClaim{ObjectMeta:{myclaim  default  75f05ff3-138e-4671-ad10-9c4d4f82e09d 748 0 2023-08-21 11:13:04 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2023-08-21 11:13:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2023-08-21 11:13:04 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0821 11:13:04.845691       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-75f05ff3-138e-4671-ad10-9c4d4f82e09d" provisioned
	I0821 11:13:04.845775       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0821 11:13:04.845816       1 volume_store.go:212] Trying to save persistentvolume "pvc-75f05ff3-138e-4671-ad10-9c4d4f82e09d"
	I0821 11:13:04.847055       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"75f05ff3-138e-4671-ad10-9c4d4f82e09d", APIVersion:"v1", ResourceVersion:"748", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0821 11:13:04.868594       1 volume_store.go:219] persistentvolume "pvc-75f05ff3-138e-4671-ad10-9c4d4f82e09d" saved
	I0821 11:13:04.869345       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"75f05ff3-138e-4671-ad10-9c4d4f82e09d", APIVersion:"v1", ResourceVersion:"748", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-75f05ff3-138e-4671-ad10-9c4d4f82e09d
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-723696 -n functional-723696
helpers_test.go:261: (dbg) Run:  kubectl --context functional-723696 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount sp-pod kubernetes-dashboard-5c5cfc8747-rmknc
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-723696 describe pod busybox-mount sp-pod kubernetes-dashboard-5c5cfc8747-rmknc
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context functional-723696 describe pod busybox-mount sp-pod kubernetes-dashboard-5c5cfc8747-rmknc: exit status 1 (135.372733ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-723696/192.168.49.2
	Start Time:       Mon, 21 Aug 2023 11:13:17 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  mount-munger:
	    Container ID:  cri-o://008cead82fc6275fcebed5ad21893f435d47bcbd3c2fce3ca7a3d1bb0cf74015
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 21 Aug 2023 11:13:19 +0000
	      Finished:     Mon, 21 Aug 2023 11:13:19 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qczg9 (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-qczg9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  17s   default-scheduler  Successfully assigned default/busybox-mount to functional-723696
	  Normal  Pulling    17s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     15s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.816554199s (1.816568739s including waiting)
	  Normal  Created    15s   kubelet            Created container mount-munger
	  Normal  Started    15s   kubelet            Started container mount-munger
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-723696/192.168.49.2
	Start Time:       Mon, 21 Aug 2023 11:13:33 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vtxlq (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-vtxlq:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  1s    default-scheduler  Successfully assigned default/sp-pod to functional-723696
	  Normal  Pulling    0s    kubelet            Pulling image "docker.io/nginx"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "kubernetes-dashboard-5c5cfc8747-rmknc" not found

** /stderr **
helpers_test.go:279: kubectl --context functional-723696 describe pod busybox-mount sp-pod kubernetes-dashboard-5c5cfc8747-rmknc: exit status 1
--- FAIL: TestFunctional/parallel/DashboardCmd (5.81s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (184.79s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-354854 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Done: kubectl --context ingress-addon-legacy-354854 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (16.537268436s)
addons_test.go:208: (dbg) Run:  kubectl --context ingress-addon-legacy-354854 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context ingress-addon-legacy-354854 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [d61f359e-6d50-4e73-8e52-ea52438b962b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [d61f359e-6d50-4e73-8e52-ea52438b962b] Running
E0821 11:15:55.520521 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/addons-664125/client.crt: no such file or directory
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 10.01514546s
addons_test.go:238: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-354854 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E0821 11:17:39.857747 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/functional-723696/client.crt: no such file or directory
E0821 11:17:39.863035 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/functional-723696/client.crt: no such file or directory
E0821 11:17:39.873339 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/functional-723696/client.crt: no such file or directory
E0821 11:17:39.893658 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/functional-723696/client.crt: no such file or directory
E0821 11:17:39.933966 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/functional-723696/client.crt: no such file or directory
E0821 11:17:40.014402 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/functional-723696/client.crt: no such file or directory
E0821 11:17:40.174859 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/functional-723696/client.crt: no such file or directory
E0821 11:17:40.495315 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/functional-723696/client.crt: no such file or directory
E0821 11:17:41.136214 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/functional-723696/client.crt: no such file or directory
E0821 11:17:42.416634 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/functional-723696/client.crt: no such file or directory
E0821 11:17:44.977891 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/functional-723696/client.crt: no such file or directory
E0821 11:17:50.099011 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/functional-723696/client.crt: no such file or directory
E0821 11:18:00.339584 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/functional-723696/client.crt: no such file or directory
addons_test.go:238: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ingress-addon-legacy-354854 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.936445847s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:254: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:262: (dbg) Run:  kubectl --context ingress-addon-legacy-354854 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-354854 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.49.2
E0821 11:18:20.819798 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/functional-723696/client.crt: no such file or directory
addons_test.go:273: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.019455224s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:275: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:279: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached
stderr: 
addons_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-354854 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-354854 addons disable ingress-dns --alsologtostderr -v=1: (1.217446881s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-354854 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-354854 addons disable ingress --alsologtostderr -v=1: (7.550292938s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-354854
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-354854:

-- stdout --
	[
	    {
	        "Id": "3ab3cf9efad14572b1f267d275f03a646499e896f090f54a8615a2a474ba5802",
	        "Created": "2023-08-21T11:14:06.980660808Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2768944,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-08-21T11:14:07.307372515Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:f52519afe5f6d6f3ce84cbd7f651b1292638d32ca98ee43d88f2d69e113e44de",
	        "ResolvConfPath": "/var/lib/docker/containers/3ab3cf9efad14572b1f267d275f03a646499e896f090f54a8615a2a474ba5802/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3ab3cf9efad14572b1f267d275f03a646499e896f090f54a8615a2a474ba5802/hostname",
	        "HostsPath": "/var/lib/docker/containers/3ab3cf9efad14572b1f267d275f03a646499e896f090f54a8615a2a474ba5802/hosts",
	        "LogPath": "/var/lib/docker/containers/3ab3cf9efad14572b1f267d275f03a646499e896f090f54a8615a2a474ba5802/3ab3cf9efad14572b1f267d275f03a646499e896f090f54a8615a2a474ba5802-json.log",
	        "Name": "/ingress-addon-legacy-354854",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-354854:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-354854",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/842434467df01649d5062742dcb24583a7ffe0937eeb39f833f194877131526d-init/diff:/var/lib/docker/overlay2/26861af3348249541ea382b8036362f60ea7ec122121fce2bcb8576e1879b2cb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/842434467df01649d5062742dcb24583a7ffe0937eeb39f833f194877131526d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/842434467df01649d5062742dcb24583a7ffe0937eeb39f833f194877131526d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/842434467df01649d5062742dcb24583a7ffe0937eeb39f833f194877131526d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-354854",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-354854/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-354854",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-354854",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-354854",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7074c80d1ae893946780e3f9d214b8a2f36397314167f277d71eea361bf1adfb",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36203"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36202"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36199"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36201"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36200"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/7074c80d1ae8",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-354854": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "3ab3cf9efad1",
	                        "ingress-addon-legacy-354854"
	                    ],
	                    "NetworkID": "56c84d34f10e7d97c65f5c9505c3ee44af626e23075e054fc6d79c64497a56a2",
	                    "EndpointID": "c9dba9ef94ea1ef162d72eb88227a21a72b355cc65226a9b157332c5edbc9e17",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ingress-addon-legacy-354854 -n ingress-addon-legacy-354854
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-354854 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-354854 logs -n 25: (1.413648348s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|--------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                 Args                 |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|--------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-723696 ssh findmnt        | functional-723696           | jenkins | v1.31.2 | 21 Aug 23 11:13 UTC | 21 Aug 23 11:13 UTC |
	|                | -T /mount2                           |                             |         |         |                     |                     |
	| ssh            | functional-723696 ssh findmnt        | functional-723696           | jenkins | v1.31.2 | 21 Aug 23 11:13 UTC | 21 Aug 23 11:13 UTC |
	|                | -T /mount3                           |                             |         |         |                     |                     |
	| mount          | -p functional-723696                 | functional-723696           | jenkins | v1.31.2 | 21 Aug 23 11:13 UTC |                     |
	|                | --kill=true                          |                             |         |         |                     |                     |
	| start          | -p functional-723696                 | functional-723696           | jenkins | v1.31.2 | 21 Aug 23 11:13 UTC |                     |
	|                | --dry-run --memory                   |                             |         |         |                     |                     |
	|                | 250MB --alsologtostderr              |                             |         |         |                     |                     |
	|                | --driver=docker                      |                             |         |         |                     |                     |
	|                | --container-runtime=crio             |                             |         |         |                     |                     |
	| start          | -p functional-723696                 | functional-723696           | jenkins | v1.31.2 | 21 Aug 23 11:13 UTC |                     |
	|                | --dry-run --alsologtostderr          |                             |         |         |                     |                     |
	|                | -v=1 --driver=docker                 |                             |         |         |                     |                     |
	|                | --container-runtime=crio             |                             |         |         |                     |                     |
	| start          | -p functional-723696                 | functional-723696           | jenkins | v1.31.2 | 21 Aug 23 11:13 UTC |                     |
	|                | --dry-run --memory                   |                             |         |         |                     |                     |
	|                | 250MB --alsologtostderr              |                             |         |         |                     |                     |
	|                | --driver=docker                      |                             |         |         |                     |                     |
	|                | --container-runtime=crio             |                             |         |         |                     |                     |
	| dashboard      | --url --port 36195                   | functional-723696           | jenkins | v1.31.2 | 21 Aug 23 11:13 UTC |                     |
	|                | -p functional-723696                 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1               |                             |         |         |                     |                     |
	| update-context | functional-723696                    | functional-723696           | jenkins | v1.31.2 | 21 Aug 23 11:13 UTC | 21 Aug 23 11:13 UTC |
	|                | update-context                       |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2               |                             |         |         |                     |                     |
	| update-context | functional-723696                    | functional-723696           | jenkins | v1.31.2 | 21 Aug 23 11:13 UTC | 21 Aug 23 11:13 UTC |
	|                | update-context                       |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2               |                             |         |         |                     |                     |
	| update-context | functional-723696                    | functional-723696           | jenkins | v1.31.2 | 21 Aug 23 11:13 UTC | 21 Aug 23 11:13 UTC |
	|                | update-context                       |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2               |                             |         |         |                     |                     |
	| image          | functional-723696                    | functional-723696           | jenkins | v1.31.2 | 21 Aug 23 11:13 UTC | 21 Aug 23 11:13 UTC |
	|                | image ls --format short              |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| image          | functional-723696                    | functional-723696           | jenkins | v1.31.2 | 21 Aug 23 11:13 UTC | 21 Aug 23 11:13 UTC |
	|                | image ls --format yaml               |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| ssh            | functional-723696 ssh pgrep          | functional-723696           | jenkins | v1.31.2 | 21 Aug 23 11:13 UTC |                     |
	|                | buildkitd                            |                             |         |         |                     |                     |
	| image          | functional-723696 image build -t     | functional-723696           | jenkins | v1.31.2 | 21 Aug 23 11:13 UTC | 21 Aug 23 11:13 UTC |
	|                | localhost/my-image:functional-723696 |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr     |                             |         |         |                     |                     |
	| image          | functional-723696 image ls           | functional-723696           | jenkins | v1.31.2 | 21 Aug 23 11:13 UTC | 21 Aug 23 11:13 UTC |
	| image          | functional-723696                    | functional-723696           | jenkins | v1.31.2 | 21 Aug 23 11:13 UTC | 21 Aug 23 11:13 UTC |
	|                | image ls --format json               |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| image          | functional-723696                    | functional-723696           | jenkins | v1.31.2 | 21 Aug 23 11:13 UTC | 21 Aug 23 11:13 UTC |
	|                | image ls --format table              |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| delete         | -p functional-723696                 | functional-723696           | jenkins | v1.31.2 | 21 Aug 23 11:13 UTC | 21 Aug 23 11:13 UTC |
	| start          | -p ingress-addon-legacy-354854       | ingress-addon-legacy-354854 | jenkins | v1.31.2 | 21 Aug 23 11:13 UTC | 21 Aug 23 11:15 UTC |
	|                | --kubernetes-version=v1.18.20        |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true            |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	|                | -v=5 --driver=docker                 |                             |         |         |                     |                     |
	|                | --container-runtime=crio             |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-354854          | ingress-addon-legacy-354854 | jenkins | v1.31.2 | 21 Aug 23 11:15 UTC | 21 Aug 23 11:15 UTC |
	|                | addons enable ingress                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-354854          | ingress-addon-legacy-354854 | jenkins | v1.31.2 | 21 Aug 23 11:15 UTC | 21 Aug 23 11:15 UTC |
	|                | addons enable ingress-dns            |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5               |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-354854          | ingress-addon-legacy-354854 | jenkins | v1.31.2 | 21 Aug 23 11:15 UTC |                     |
	|                | ssh curl -s http://127.0.0.1/        |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'         |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-354854 ip       | ingress-addon-legacy-354854 | jenkins | v1.31.2 | 21 Aug 23 11:18 UTC | 21 Aug 23 11:18 UTC |
	| addons         | ingress-addon-legacy-354854          | ingress-addon-legacy-354854 | jenkins | v1.31.2 | 21 Aug 23 11:18 UTC | 21 Aug 23 11:18 UTC |
	|                | addons disable ingress-dns           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-354854          | ingress-addon-legacy-354854 | jenkins | v1.31.2 | 21 Aug 23 11:18 UTC | 21 Aug 23 11:18 UTC |
	|                | addons disable ingress               |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1               |                             |         |         |                     |                     |
	|----------------|--------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/21 11:13:49
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.20.7 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0821 11:13:49.151223 2768494 out.go:296] Setting OutFile to fd 1 ...
	I0821 11:13:49.151384 2768494 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 11:13:49.151391 2768494 out.go:309] Setting ErrFile to fd 2...
	I0821 11:13:49.151399 2768494 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 11:13:49.151627 2768494 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17102-2734539/.minikube/bin
	I0821 11:13:49.152037 2768494 out.go:303] Setting JSON to false
	I0821 11:13:49.153338 2768494 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":71773,"bootTime":1692544656,"procs":478,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0821 11:13:49.153408 2768494 start.go:138] virtualization:  
	I0821 11:13:49.157207 2768494 out.go:177] * [ingress-addon-legacy-354854] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0821 11:13:49.159080 2768494 out.go:177]   - MINIKUBE_LOCATION=17102
	I0821 11:13:49.160706 2768494 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0821 11:13:49.159324 2768494 notify.go:220] Checking for updates...
	I0821 11:13:49.164377 2768494 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17102-2734539/kubeconfig
	I0821 11:13:49.166697 2768494 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17102-2734539/.minikube
	I0821 11:13:49.168756 2768494 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0821 11:13:49.170766 2768494 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0821 11:13:49.173142 2768494 driver.go:373] Setting default libvirt URI to qemu:///system
	I0821 11:13:49.197343 2768494 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0821 11:13:49.197502 2768494 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0821 11:13:49.289932 2768494 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:35 SystemTime:2023-08-21 11:13:49.27946276 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0821 11:13:49.290042 2768494 docker.go:294] overlay module found
	I0821 11:13:49.292038 2768494 out.go:177] * Using the docker driver based on user configuration
	I0821 11:13:49.293971 2768494 start.go:298] selected driver: docker
	I0821 11:13:49.293995 2768494 start.go:902] validating driver "docker" against <nil>
	I0821 11:13:49.294010 2768494 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0821 11:13:49.294626 2768494 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0821 11:13:49.361342 2768494 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:35 SystemTime:2023-08-21 11:13:49.351614671 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0821 11:13:49.361510 2768494 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0821 11:13:49.361719 2768494 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0821 11:13:49.363607 2768494 out.go:177] * Using Docker driver with root privileges
	I0821 11:13:49.365473 2768494 cni.go:84] Creating CNI manager for ""
	I0821 11:13:49.365485 2768494 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0821 11:13:49.365503 2768494 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0821 11:13:49.365523 2768494 start_flags.go:319] config:
	{Name:ingress-addon-legacy-354854 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-354854 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 11:13:49.367555 2768494 out.go:177] * Starting control plane node ingress-addon-legacy-354854 in cluster ingress-addon-legacy-354854
	I0821 11:13:49.369447 2768494 cache.go:122] Beginning downloading kic base image for docker with crio
	I0821 11:13:49.371305 2768494 out.go:177] * Pulling base image ...
	I0821 11:13:49.373123 2768494 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0821 11:13:49.373154 2768494 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0821 11:13:49.389601 2768494 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon, skipping pull
	I0821 11:13:49.389628 2768494 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in daemon, skipping load
	I0821 11:13:49.437908 2768494 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I0821 11:13:49.437932 2768494 cache.go:57] Caching tarball of preloaded images
	I0821 11:13:49.438087 2768494 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0821 11:13:49.440250 2768494 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0821 11:13:49.442128 2768494 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I0821 11:13:49.563230 2768494 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4?checksum=md5:8ddd7f37d9a9977fe856222993d36c3d -> /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I0821 11:13:59.127383 2768494 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I0821 11:13:59.127489 2768494 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I0821 11:14:00.293740 2768494 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on crio
	I0821 11:14:00.294180 2768494 profile.go:148] Saving config to /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/ingress-addon-legacy-354854/config.json ...
	I0821 11:14:00.294216 2768494 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/ingress-addon-legacy-354854/config.json: {Name:mk02c25ca022e6401dbe5ec9b5d6c53eb900d465 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 11:14:00.294411 2768494 cache.go:195] Successfully downloaded all kic artifacts
	I0821 11:14:00.294473 2768494 start.go:365] acquiring machines lock for ingress-addon-legacy-354854: {Name:mk447b02c4c285227e18f8520db0d18686df467c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 11:14:00.294530 2768494 start.go:369] acquired machines lock for "ingress-addon-legacy-354854" in 42.305µs
	I0821 11:14:00.294552 2768494 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-354854 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-354854 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0821 11:14:00.294626 2768494 start.go:125] createHost starting for "" (driver="docker")
	I0821 11:14:00.296924 2768494 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0821 11:14:00.297156 2768494 start.go:159] libmachine.API.Create for "ingress-addon-legacy-354854" (driver="docker")
	I0821 11:14:00.297186 2768494 client.go:168] LocalClient.Create starting
	I0821 11:14:00.297266 2768494 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17102-2734539/.minikube/certs/ca.pem
	I0821 11:14:00.297307 2768494 main.go:141] libmachine: Decoding PEM data...
	I0821 11:14:00.297325 2768494 main.go:141] libmachine: Parsing certificate...
	I0821 11:14:00.297386 2768494 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17102-2734539/.minikube/certs/cert.pem
	I0821 11:14:00.297410 2768494 main.go:141] libmachine: Decoding PEM data...
	I0821 11:14:00.297424 2768494 main.go:141] libmachine: Parsing certificate...
	I0821 11:14:00.297810 2768494 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-354854 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0821 11:14:00.316380 2768494 cli_runner.go:211] docker network inspect ingress-addon-legacy-354854 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0821 11:14:00.316607 2768494 network_create.go:281] running [docker network inspect ingress-addon-legacy-354854] to gather additional debugging logs...
	I0821 11:14:00.316649 2768494 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-354854
	W0821 11:14:00.335307 2768494 cli_runner.go:211] docker network inspect ingress-addon-legacy-354854 returned with exit code 1
	I0821 11:14:00.335350 2768494 network_create.go:284] error running [docker network inspect ingress-addon-legacy-354854]: docker network inspect ingress-addon-legacy-354854: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-354854 not found
	I0821 11:14:00.335364 2768494 network_create.go:286] output of [docker network inspect ingress-addon-legacy-354854]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-354854 not found
	
	** /stderr **
	I0821 11:14:00.335436 2768494 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0821 11:14:00.353722 2768494 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40009f36f0}
	I0821 11:14:00.353766 2768494 network_create.go:123] attempt to create docker network ingress-addon-legacy-354854 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0821 11:14:00.353830 2768494 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-354854 ingress-addon-legacy-354854
	I0821 11:14:00.425693 2768494 network_create.go:107] docker network ingress-addon-legacy-354854 192.168.49.0/24 created
	I0821 11:14:00.425720 2768494 kic.go:117] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-354854" container
	I0821 11:14:00.425808 2768494 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0821 11:14:00.442869 2768494 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-354854 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-354854 --label created_by.minikube.sigs.k8s.io=true
	I0821 11:14:00.460965 2768494 oci.go:103] Successfully created a docker volume ingress-addon-legacy-354854
	I0821 11:14:00.461052 2768494 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-354854-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-354854 --entrypoint /usr/bin/test -v ingress-addon-legacy-354854:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib
	I0821 11:14:01.979505 2768494 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-354854-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-354854 --entrypoint /usr/bin/test -v ingress-addon-legacy-354854:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib: (1.518408601s)
	I0821 11:14:01.979535 2768494 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-354854
	I0821 11:14:01.979562 2768494 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0821 11:14:01.979581 2768494 kic.go:190] Starting extracting preloaded images to volume ...
	I0821 11:14:01.979680 2768494 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-354854:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir
	I0821 11:14:06.898335 2768494 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-354854:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir: (4.918611759s)
	I0821 11:14:06.898370 2768494 kic.go:199] duration metric: took 4.918785 seconds to extract preloaded images to volume
	W0821 11:14:06.898519 2768494 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0821 11:14:06.898636 2768494 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0821 11:14:06.964486 2768494 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-354854 --name ingress-addon-legacy-354854 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-354854 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-354854 --network ingress-addon-legacy-354854 --ip 192.168.49.2 --volume ingress-addon-legacy-354854:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631
	I0821 11:14:07.315084 2768494 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-354854 --format={{.State.Running}}
	I0821 11:14:07.339049 2768494 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-354854 --format={{.State.Status}}
	I0821 11:14:07.360544 2768494 cli_runner.go:164] Run: docker exec ingress-addon-legacy-354854 stat /var/lib/dpkg/alternatives/iptables
	I0821 11:14:07.439003 2768494 oci.go:144] the created container "ingress-addon-legacy-354854" has a running status.
	I0821 11:14:07.439029 2768494 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17102-2734539/.minikube/machines/ingress-addon-legacy-354854/id_rsa...
	I0821 11:14:07.775001 2768494 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-2734539/.minikube/machines/ingress-addon-legacy-354854/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0821 11:14:07.775123 2768494 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17102-2734539/.minikube/machines/ingress-addon-legacy-354854/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0821 11:14:07.824110 2768494 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-354854 --format={{.State.Status}}
	I0821 11:14:07.848863 2768494 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0821 11:14:07.848887 2768494 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-354854 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0821 11:14:07.939562 2768494 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-354854 --format={{.State.Status}}
	I0821 11:14:07.967258 2768494 machine.go:88] provisioning docker machine ...
	I0821 11:14:07.967294 2768494 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-354854"
	I0821 11:14:07.967361 2768494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-354854
	I0821 11:14:08.001839 2768494 main.go:141] libmachine: Using SSH client type: native
	I0821 11:14:08.002348 2768494 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0570] 0x3a2f00 <nil>  [] 0s} 127.0.0.1 36203 <nil> <nil>}
	I0821 11:14:08.002371 2768494 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-354854 && echo "ingress-addon-legacy-354854" | sudo tee /etc/hostname
	I0821 11:14:08.003187 2768494 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43502->127.0.0.1:36203: read: connection reset by peer
	I0821 11:14:11.144293 2768494 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-354854
	
	I0821 11:14:11.144441 2768494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-354854
	I0821 11:14:11.163859 2768494 main.go:141] libmachine: Using SSH client type: native
	I0821 11:14:11.164293 2768494 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0570] 0x3a2f00 <nil>  [] 0s} 127.0.0.1 36203 <nil> <nil>}
	I0821 11:14:11.164311 2768494 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-354854' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-354854/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-354854' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0821 11:14:11.291006 2768494 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0821 11:14:11.291045 2768494 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17102-2734539/.minikube CaCertPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17102-2734539/.minikube}
	I0821 11:14:11.291075 2768494 ubuntu.go:177] setting up certificates
	I0821 11:14:11.291084 2768494 provision.go:83] configureAuth start
	I0821 11:14:11.291155 2768494 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-354854
	I0821 11:14:11.310052 2768494 provision.go:138] copyHostCerts
	I0821 11:14:11.310092 2768494 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-2734539/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17102-2734539/.minikube/key.pem
	I0821 11:14:11.310122 2768494 exec_runner.go:144] found /home/jenkins/minikube-integration/17102-2734539/.minikube/key.pem, removing ...
	I0821 11:14:11.310131 2768494 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17102-2734539/.minikube/key.pem
	I0821 11:14:11.310205 2768494 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17102-2734539/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17102-2734539/.minikube/key.pem (1675 bytes)
	I0821 11:14:11.310282 2768494 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-2734539/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17102-2734539/.minikube/ca.pem
	I0821 11:14:11.310304 2768494 exec_runner.go:144] found /home/jenkins/minikube-integration/17102-2734539/.minikube/ca.pem, removing ...
	I0821 11:14:11.310312 2768494 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17102-2734539/.minikube/ca.pem
	I0821 11:14:11.310339 2768494 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17102-2734539/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17102-2734539/.minikube/ca.pem (1078 bytes)
	I0821 11:14:11.310387 2768494 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-2734539/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17102-2734539/.minikube/cert.pem
	I0821 11:14:11.310407 2768494 exec_runner.go:144] found /home/jenkins/minikube-integration/17102-2734539/.minikube/cert.pem, removing ...
	I0821 11:14:11.310411 2768494 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17102-2734539/.minikube/cert.pem
	I0821 11:14:11.310439 2768494 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17102-2734539/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17102-2734539/.minikube/cert.pem (1123 bytes)
	I0821 11:14:11.310533 2768494 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17102-2734539/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17102-2734539/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17102-2734539/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-354854 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-354854]
	I0821 11:14:11.784537 2768494 provision.go:172] copyRemoteCerts
	I0821 11:14:11.784608 2768494 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0821 11:14:11.784653 2768494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-354854
	I0821 11:14:11.802323 2768494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36203 SSHKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/machines/ingress-addon-legacy-354854/id_rsa Username:docker}
	I0821 11:14:11.896275 2768494 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-2734539/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0821 11:14:11.896369 2768494 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-2734539/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0821 11:14:11.923980 2768494 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-2734539/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0821 11:14:11.924036 2768494 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-2734539/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0821 11:14:11.950699 2768494 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-2734539/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0821 11:14:11.950760 2768494 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-2734539/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0821 11:14:11.977685 2768494 provision.go:86] duration metric: configureAuth took 686.584723ms
	I0821 11:14:11.977709 2768494 ubuntu.go:193] setting minikube options for container-runtime
	I0821 11:14:11.977931 2768494 config.go:182] Loaded profile config "ingress-addon-legacy-354854": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0821 11:14:11.978039 2768494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-354854
	I0821 11:14:11.995036 2768494 main.go:141] libmachine: Using SSH client type: native
	I0821 11:14:11.995470 2768494 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0570] 0x3a2f00 <nil>  [] 0s} 127.0.0.1 36203 <nil> <nil>}
	I0821 11:14:11.995491 2768494 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0821 11:14:12.273164 2768494 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0821 11:14:12.273183 2768494 machine.go:91] provisioned docker machine in 4.3059062s
	I0821 11:14:12.273192 2768494 client.go:171] LocalClient.Create took 11.976000808s
	I0821 11:14:12.273217 2768494 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-354854" took 11.976050374s
	I0821 11:14:12.273225 2768494 start.go:300] post-start starting for "ingress-addon-legacy-354854" (driver="docker")
	I0821 11:14:12.273235 2768494 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0821 11:14:12.273296 2768494 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0821 11:14:12.273341 2768494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-354854
	I0821 11:14:12.290958 2768494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36203 SSHKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/machines/ingress-addon-legacy-354854/id_rsa Username:docker}
	I0821 11:14:12.389903 2768494 ssh_runner.go:195] Run: cat /etc/os-release
	I0821 11:14:12.394880 2768494 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0821 11:14:12.394958 2768494 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0821 11:14:12.394988 2768494 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0821 11:14:12.395011 2768494 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0821 11:14:12.395044 2768494 filesync.go:126] Scanning /home/jenkins/minikube-integration/17102-2734539/.minikube/addons for local assets ...
	I0821 11:14:12.395120 2768494 filesync.go:126] Scanning /home/jenkins/minikube-integration/17102-2734539/.minikube/files for local assets ...
	I0821 11:14:12.395229 2768494 filesync.go:149] local asset: /home/jenkins/minikube-integration/17102-2734539/.minikube/files/etc/ssl/certs/27399302.pem -> 27399302.pem in /etc/ssl/certs
	I0821 11:14:12.395260 2768494 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-2734539/.minikube/files/etc/ssl/certs/27399302.pem -> /etc/ssl/certs/27399302.pem
	I0821 11:14:12.395396 2768494 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0821 11:14:12.407037 2768494 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-2734539/.minikube/files/etc/ssl/certs/27399302.pem --> /etc/ssl/certs/27399302.pem (1708 bytes)
	I0821 11:14:12.444951 2768494 start.go:303] post-start completed in 171.711178ms
	I0821 11:14:12.445361 2768494 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-354854
	I0821 11:14:12.462907 2768494 profile.go:148] Saving config to /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/ingress-addon-legacy-354854/config.json ...
	I0821 11:14:12.463187 2768494 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0821 11:14:12.463236 2768494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-354854
	I0821 11:14:12.479731 2768494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36203 SSHKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/machines/ingress-addon-legacy-354854/id_rsa Username:docker}
	I0821 11:14:12.567957 2768494 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0821 11:14:12.573691 2768494 start.go:128] duration metric: createHost completed in 12.279050511s
	I0821 11:14:12.573713 2768494 start.go:83] releasing machines lock for "ingress-addon-legacy-354854", held for 12.27917457s
	I0821 11:14:12.573795 2768494 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-354854
	I0821 11:14:12.590790 2768494 ssh_runner.go:195] Run: cat /version.json
	I0821 11:14:12.590823 2768494 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0821 11:14:12.590841 2768494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-354854
	I0821 11:14:12.590885 2768494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-354854
	I0821 11:14:12.615214 2768494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36203 SSHKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/machines/ingress-addon-legacy-354854/id_rsa Username:docker}
	I0821 11:14:12.627175 2768494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36203 SSHKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/machines/ingress-addon-legacy-354854/id_rsa Username:docker}
	I0821 11:14:12.847717 2768494 ssh_runner.go:195] Run: systemctl --version
	I0821 11:14:12.853190 2768494 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0821 11:14:13.005989 2768494 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0821 11:14:13.011668 2768494 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0821 11:14:13.036446 2768494 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0821 11:14:13.036581 2768494 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0821 11:14:13.073499 2768494 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0821 11:14:13.073523 2768494 start.go:466] detecting cgroup driver to use...
	I0821 11:14:13.073585 2768494 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0821 11:14:13.073653 2768494 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0821 11:14:13.092894 2768494 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0821 11:14:13.106382 2768494 docker.go:196] disabling cri-docker service (if available) ...
	I0821 11:14:13.106446 2768494 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0821 11:14:13.122170 2768494 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0821 11:14:13.138826 2768494 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0821 11:14:13.239201 2768494 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0821 11:14:13.348252 2768494 docker.go:212] disabling docker service ...
	I0821 11:14:13.348367 2768494 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0821 11:14:13.373505 2768494 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0821 11:14:13.389586 2768494 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0821 11:14:13.489003 2768494 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0821 11:14:13.597077 2768494 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0821 11:14:13.609972 2768494 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0821 11:14:13.628706 2768494 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0821 11:14:13.628803 2768494 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0821 11:14:13.640488 2768494 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0821 11:14:13.640600 2768494 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0821 11:14:13.652343 2768494 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0821 11:14:13.663577 2768494 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0821 11:14:13.675024 2768494 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0821 11:14:13.685403 2768494 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0821 11:14:13.695580 2768494 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0821 11:14:13.705336 2768494 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0821 11:14:13.799969 2768494 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0821 11:14:13.931929 2768494 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0821 11:14:13.932025 2768494 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0821 11:14:13.936691 2768494 start.go:534] Will wait 60s for crictl version
	I0821 11:14:13.936754 2768494 ssh_runner.go:195] Run: which crictl
	I0821 11:14:13.941014 2768494 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0821 11:14:13.981831 2768494 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0821 11:14:13.981938 2768494 ssh_runner.go:195] Run: crio --version
	I0821 11:14:14.030803 2768494 ssh_runner.go:195] Run: crio --version
	I0821 11:14:14.077765 2768494 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.6 ...
	I0821 11:14:14.079742 2768494 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-354854 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0821 11:14:14.097021 2768494 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0821 11:14:14.101616 2768494 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0821 11:14:14.115101 2768494 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0821 11:14:14.115177 2768494 ssh_runner.go:195] Run: sudo crictl images --output json
	I0821 11:14:14.170618 2768494 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0821 11:14:14.170691 2768494 ssh_runner.go:195] Run: which lz4
	I0821 11:14:14.175051 2768494 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 -> /preloaded.tar.lz4
	I0821 11:14:14.175157 2768494 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0821 11:14:14.179568 2768494 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0821 11:14:14.179600 2768494 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 --> /preloaded.tar.lz4 (489766197 bytes)
	I0821 11:14:16.344039 2768494 crio.go:444] Took 2.168923 seconds to copy over tarball
	I0821 11:14:16.344126 2768494 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0821 11:14:18.990698 2768494 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.646535135s)
	I0821 11:14:18.990747 2768494 crio.go:451] Took 2.646668 seconds to extract the tarball
	I0821 11:14:18.990757 2768494 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0821 11:14:19.077778 2768494 ssh_runner.go:195] Run: sudo crictl images --output json
	I0821 11:14:19.118036 2768494 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0821 11:14:19.118060 2768494 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0821 11:14:19.118153 2768494 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0821 11:14:19.118364 2768494 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0821 11:14:19.118474 2768494 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0821 11:14:19.118540 2768494 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0821 11:14:19.118613 2768494 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0821 11:14:19.118668 2768494 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0821 11:14:19.118726 2768494 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0821 11:14:19.118787 2768494 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0821 11:14:19.119714 2768494 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0821 11:14:19.120512 2768494 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0821 11:14:19.120706 2768494 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0821 11:14:19.120971 2768494 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0821 11:14:19.121134 2768494 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0821 11:14:19.121262 2768494 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0821 11:14:19.121394 2768494 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0821 11:14:19.122243 2768494 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	W0821 11:14:19.581532 2768494 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0821 11:14:19.581706 2768494 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	W0821 11:14:19.604069 2768494 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0821 11:14:19.604380 2768494 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	W0821 11:14:19.608800 2768494 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I0821 11:14:19.609058 2768494 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	W0821 11:14:19.640566 2768494 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I0821 11:14:19.640936 2768494 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0821 11:14:19.643179 2768494 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I0821 11:14:19.643261 2768494 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0821 11:14:19.643416 2768494 ssh_runner.go:195] Run: which crictl
	I0821 11:14:19.652132 2768494 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	W0821 11:14:19.662288 2768494 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0821 11:14:19.662521 2768494 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	W0821 11:14:19.665708 2768494 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0821 11:14:19.665945 2768494 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0821 11:14:19.704880 2768494 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I0821 11:14:19.704918 2768494 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0821 11:14:19.704967 2768494 ssh_runner.go:195] Run: which crictl
	W0821 11:14:19.713787 2768494 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0821 11:14:19.713968 2768494 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0821 11:14:19.758745 2768494 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I0821 11:14:19.758784 2768494 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I0821 11:14:19.758829 2768494 ssh_runner.go:195] Run: which crictl
	I0821 11:14:19.819288 2768494 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I0821 11:14:19.819377 2768494 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0821 11:14:19.819451 2768494 ssh_runner.go:195] Run: which crictl
	I0821 11:14:19.819566 2768494 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0821 11:14:19.828210 2768494 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I0821 11:14:19.828299 2768494 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0821 11:14:19.828370 2768494 ssh_runner.go:195] Run: which crictl
	I0821 11:14:19.828487 2768494 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I0821 11:14:19.828536 2768494 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0821 11:14:19.828579 2768494 ssh_runner.go:195] Run: which crictl
	I0821 11:14:19.861122 2768494 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I0821 11:14:19.861211 2768494 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0821 11:14:19.861283 2768494 ssh_runner.go:195] Run: which crictl
	I0821 11:14:19.861401 2768494 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I0821 11:14:19.985605 2768494 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I0821 11:14:19.985742 2768494 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	I0821 11:14:19.985788 2768494 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I0821 11:14:19.985906 2768494 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0821 11:14:19.985973 2768494 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0821 11:14:19.986046 2768494 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	I0821 11:14:19.986080 2768494 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0821 11:14:19.986165 2768494 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0821 11:14:19.986191 2768494 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0821 11:14:19.986218 2768494 ssh_runner.go:195] Run: which crictl
	I0821 11:14:20.124949 2768494 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0821 11:14:20.125013 2768494 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	I0821 11:14:20.125047 2768494 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	I0821 11:14:20.125082 2768494 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	I0821 11:14:20.125090 2768494 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I0821 11:14:20.125163 2768494 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0821 11:14:20.179144 2768494 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0821 11:14:20.179218 2768494 cache_images.go:92] LoadImages completed in 1.061144513s
	W0821 11:14:20.179291 2768494 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20: no such file or directory
	I0821 11:14:20.179358 2768494 ssh_runner.go:195] Run: crio config
	I0821 11:14:20.237496 2768494 cni.go:84] Creating CNI manager for ""
	I0821 11:14:20.237515 2768494 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0821 11:14:20.237548 2768494 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0821 11:14:20.237566 2768494 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-354854 NodeName:ingress-addon-legacy-354854 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0821 11:14:20.237775 2768494 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-354854"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0821 11:14:20.237862 2768494 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=ingress-addon-legacy-354854 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-354854 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0821 11:14:20.237950 2768494 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0821 11:14:20.248611 2768494 binaries.go:44] Found k8s binaries, skipping transfer
	I0821 11:14:20.248700 2768494 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0821 11:14:20.259343 2768494 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I0821 11:14:20.279966 2768494 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0821 11:14:20.301979 2768494 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0821 11:14:20.322892 2768494 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0821 11:14:20.327105 2768494 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0821 11:14:20.340142 2768494 certs.go:56] Setting up /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/ingress-addon-legacy-354854 for IP: 192.168.49.2
	I0821 11:14:20.340214 2768494 certs.go:190] acquiring lock for shared ca certs: {Name:mkf22db11ef8c10db9220127fbe1c5ce3b246b6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 11:14:20.340387 2768494 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17102-2734539/.minikube/ca.key
	I0821 11:14:20.340439 2768494 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17102-2734539/.minikube/proxy-client-ca.key
	I0821 11:14:20.340496 2768494 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/ingress-addon-legacy-354854/client.key
	I0821 11:14:20.340511 2768494 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/ingress-addon-legacy-354854/client.crt with IP's: []
	I0821 11:14:21.144371 2768494 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/ingress-addon-legacy-354854/client.crt ...
	I0821 11:14:21.144404 2768494 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/ingress-addon-legacy-354854/client.crt: {Name:mk6f55e7b68c0f5003abbe62a836ec15187e165d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 11:14:21.144600 2768494 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/ingress-addon-legacy-354854/client.key ...
	I0821 11:14:21.144613 2768494 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/ingress-addon-legacy-354854/client.key: {Name:mk86518c7aacbaec636136d8e29b89a9ad11ff17 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 11:14:21.144704 2768494 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/ingress-addon-legacy-354854/apiserver.key.dd3b5fb2
	I0821 11:14:21.144721 2768494 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/ingress-addon-legacy-354854/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0821 11:14:21.469606 2768494 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/ingress-addon-legacy-354854/apiserver.crt.dd3b5fb2 ...
	I0821 11:14:21.469637 2768494 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/ingress-addon-legacy-354854/apiserver.crt.dd3b5fb2: {Name:mk011ed93777f15f4654bed7e2330a7fdcfbe4b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 11:14:21.469821 2768494 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/ingress-addon-legacy-354854/apiserver.key.dd3b5fb2 ...
	I0821 11:14:21.469833 2768494 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/ingress-addon-legacy-354854/apiserver.key.dd3b5fb2: {Name:mka0f8fe4180c7e27380e3a5d6d63ee9f905311c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 11:14:21.469937 2768494 certs.go:337] copying /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/ingress-addon-legacy-354854/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/ingress-addon-legacy-354854/apiserver.crt
	I0821 11:14:21.470016 2768494 certs.go:341] copying /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/ingress-addon-legacy-354854/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/ingress-addon-legacy-354854/apiserver.key
	I0821 11:14:21.470074 2768494 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/ingress-addon-legacy-354854/proxy-client.key
	I0821 11:14:21.470086 2768494 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/ingress-addon-legacy-354854/proxy-client.crt with IP's: []
	I0821 11:14:21.963560 2768494 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/ingress-addon-legacy-354854/proxy-client.crt ...
	I0821 11:14:21.963593 2768494 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/ingress-addon-legacy-354854/proxy-client.crt: {Name:mk1483c9755a4db6e436e423c175dcb46bb80e14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 11:14:21.963773 2768494 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/ingress-addon-legacy-354854/proxy-client.key ...
	I0821 11:14:21.963786 2768494 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/ingress-addon-legacy-354854/proxy-client.key: {Name:mkec498a3ce8979ef5c0f03617da074107d18ead Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 11:14:21.963869 2768494 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/ingress-addon-legacy-354854/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0821 11:14:21.963888 2768494 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/ingress-addon-legacy-354854/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0821 11:14:21.963900 2768494 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/ingress-addon-legacy-354854/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0821 11:14:21.963917 2768494 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/ingress-addon-legacy-354854/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0821 11:14:21.963935 2768494 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-2734539/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0821 11:14:21.963949 2768494 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-2734539/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0821 11:14:21.963964 2768494 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-2734539/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0821 11:14:21.963976 2768494 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-2734539/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0821 11:14:21.964032 2768494 certs.go:437] found cert: /home/jenkins/minikube-integration/17102-2734539/.minikube/certs/home/jenkins/minikube-integration/17102-2734539/.minikube/certs/2739930.pem (1338 bytes)
	W0821 11:14:21.964073 2768494 certs.go:433] ignoring /home/jenkins/minikube-integration/17102-2734539/.minikube/certs/home/jenkins/minikube-integration/17102-2734539/.minikube/certs/2739930_empty.pem, impossibly tiny 0 bytes
	I0821 11:14:21.964090 2768494 certs.go:437] found cert: /home/jenkins/minikube-integration/17102-2734539/.minikube/certs/home/jenkins/minikube-integration/17102-2734539/.minikube/certs/ca-key.pem (1679 bytes)
	I0821 11:14:21.964115 2768494 certs.go:437] found cert: /home/jenkins/minikube-integration/17102-2734539/.minikube/certs/home/jenkins/minikube-integration/17102-2734539/.minikube/certs/ca.pem (1078 bytes)
	I0821 11:14:21.964143 2768494 certs.go:437] found cert: /home/jenkins/minikube-integration/17102-2734539/.minikube/certs/home/jenkins/minikube-integration/17102-2734539/.minikube/certs/cert.pem (1123 bytes)
	I0821 11:14:21.964165 2768494 certs.go:437] found cert: /home/jenkins/minikube-integration/17102-2734539/.minikube/certs/home/jenkins/minikube-integration/17102-2734539/.minikube/certs/key.pem (1675 bytes)
	I0821 11:14:21.964220 2768494 certs.go:437] found cert: /home/jenkins/minikube-integration/17102-2734539/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17102-2734539/.minikube/files/etc/ssl/certs/27399302.pem (1708 bytes)
	I0821 11:14:21.964257 2768494 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-2734539/.minikube/files/etc/ssl/certs/27399302.pem -> /usr/share/ca-certificates/27399302.pem
	I0821 11:14:21.964275 2768494 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-2734539/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0821 11:14:21.964285 2768494 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-2734539/.minikube/certs/2739930.pem -> /usr/share/ca-certificates/2739930.pem
	I0821 11:14:21.964841 2768494 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/ingress-addon-legacy-354854/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0821 11:14:21.994735 2768494 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/ingress-addon-legacy-354854/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0821 11:14:22.024320 2768494 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/ingress-addon-legacy-354854/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0821 11:14:22.053604 2768494 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/ingress-addon-legacy-354854/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0821 11:14:22.081542 2768494 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-2734539/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0821 11:14:22.109370 2768494 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-2734539/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0821 11:14:22.136747 2768494 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-2734539/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0821 11:14:22.164217 2768494 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-2734539/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0821 11:14:22.191366 2768494 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-2734539/.minikube/files/etc/ssl/certs/27399302.pem --> /usr/share/ca-certificates/27399302.pem (1708 bytes)
	I0821 11:14:22.219297 2768494 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-2734539/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0821 11:14:22.246316 2768494 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-2734539/.minikube/certs/2739930.pem --> /usr/share/ca-certificates/2739930.pem (1338 bytes)
	I0821 11:14:22.273313 2768494 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0821 11:14:22.293526 2768494 ssh_runner.go:195] Run: openssl version
	I0821 11:14:22.300626 2768494 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0821 11:14:22.312170 2768494 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0821 11:14:22.316781 2768494 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 21 11:03 /usr/share/ca-certificates/minikubeCA.pem
	I0821 11:14:22.316847 2768494 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0821 11:14:22.325396 2768494 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0821 11:14:22.337065 2768494 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2739930.pem && ln -fs /usr/share/ca-certificates/2739930.pem /etc/ssl/certs/2739930.pem"
	I0821 11:14:22.348668 2768494 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2739930.pem
	I0821 11:14:22.353150 2768494 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 21 11:09 /usr/share/ca-certificates/2739930.pem
	I0821 11:14:22.353211 2768494 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2739930.pem
	I0821 11:14:22.361345 2768494 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2739930.pem /etc/ssl/certs/51391683.0"
	I0821 11:14:22.372368 2768494 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27399302.pem && ln -fs /usr/share/ca-certificates/27399302.pem /etc/ssl/certs/27399302.pem"
	I0821 11:14:22.383488 2768494 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/27399302.pem
	I0821 11:14:22.388162 2768494 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 21 11:09 /usr/share/ca-certificates/27399302.pem
	I0821 11:14:22.388232 2768494 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27399302.pem
	I0821 11:14:22.396440 2768494 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/27399302.pem /etc/ssl/certs/3ec20f2e.0"
	I0821 11:14:22.407499 2768494 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0821 11:14:22.411705 2768494 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0821 11:14:22.411753 2768494 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-354854 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-354854 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 11:14:22.411838 2768494 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0821 11:14:22.411891 2768494 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0821 11:14:22.452043 2768494 cri.go:89] found id: ""
	I0821 11:14:22.452119 2768494 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0821 11:14:22.462328 2768494 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0821 11:14:22.472429 2768494 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0821 11:14:22.472544 2768494 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0821 11:14:22.483043 2768494 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0821 11:14:22.483104 2768494 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0821 11:14:22.537806 2768494 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0821 11:14:22.538048 2768494 kubeadm.go:322] [preflight] Running pre-flight checks
	I0821 11:14:22.589427 2768494 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0821 11:14:22.589554 2768494 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1041-aws
	I0821 11:14:22.589603 2768494 kubeadm.go:322] OS: Linux
	I0821 11:14:22.589656 2768494 kubeadm.go:322] CGROUPS_CPU: enabled
	I0821 11:14:22.589705 2768494 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0821 11:14:22.589782 2768494 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0821 11:14:22.589837 2768494 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0821 11:14:22.589909 2768494 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0821 11:14:22.589958 2768494 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0821 11:14:22.690453 2768494 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0821 11:14:22.690567 2768494 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0821 11:14:22.690661 2768494 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0821 11:14:22.922097 2768494 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0821 11:14:22.923620 2768494 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0821 11:14:22.923894 2768494 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0821 11:14:23.026263 2768494 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0821 11:14:23.030916 2768494 out.go:204]   - Generating certificates and keys ...
	I0821 11:14:23.031108 2768494 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0821 11:14:23.031229 2768494 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0821 11:14:23.294978 2768494 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0821 11:14:24.247546 2768494 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0821 11:14:24.759464 2768494 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0821 11:14:25.057290 2768494 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0821 11:14:25.934971 2768494 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0821 11:14:25.935373 2768494 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-354854 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0821 11:14:26.204169 2768494 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0821 11:14:26.204555 2768494 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-354854 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0821 11:14:26.397523 2768494 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0821 11:14:26.807909 2768494 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0821 11:14:27.470485 2768494 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0821 11:14:27.470888 2768494 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0821 11:14:27.803232 2768494 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0821 11:14:28.107033 2768494 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0821 11:14:28.516622 2768494 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0821 11:14:28.866349 2768494 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0821 11:14:28.867267 2768494 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0821 11:14:28.874915 2768494 out.go:204]   - Booting up control plane ...
	I0821 11:14:28.875025 2768494 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0821 11:14:28.883276 2768494 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0821 11:14:28.889073 2768494 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0821 11:14:28.900586 2768494 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0821 11:14:28.902382 2768494 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0821 11:14:41.904992 2768494 kubeadm.go:322] [apiclient] All control plane components are healthy after 13.002596 seconds
	I0821 11:14:41.905107 2768494 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0821 11:14:41.918484 2768494 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0821 11:14:42.437431 2768494 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0821 11:14:42.437615 2768494 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-354854 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0821 11:14:42.945211 2768494 kubeadm.go:322] [bootstrap-token] Using token: lvr7jn.abzu5d6dy061zbg8
	I0821 11:14:42.947455 2768494 out.go:204]   - Configuring RBAC rules ...
	I0821 11:14:42.947576 2768494 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0821 11:14:42.962519 2768494 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0821 11:14:42.973017 2768494 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0821 11:14:42.976314 2768494 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0821 11:14:42.979332 2768494 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0821 11:14:42.983487 2768494 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0821 11:14:42.994630 2768494 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0821 11:14:43.288409 2768494 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0821 11:14:43.385145 2768494 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0821 11:14:43.386699 2768494 kubeadm.go:322] 
	I0821 11:14:43.386772 2768494 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0821 11:14:43.386786 2768494 kubeadm.go:322] 
	I0821 11:14:43.386859 2768494 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0821 11:14:43.386868 2768494 kubeadm.go:322] 
	I0821 11:14:43.386893 2768494 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0821 11:14:43.386954 2768494 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0821 11:14:43.387008 2768494 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0821 11:14:43.387017 2768494 kubeadm.go:322] 
	I0821 11:14:43.387066 2768494 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0821 11:14:43.387138 2768494 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0821 11:14:43.387207 2768494 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0821 11:14:43.387219 2768494 kubeadm.go:322] 
	I0821 11:14:43.387298 2768494 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0821 11:14:43.387375 2768494 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0821 11:14:43.387386 2768494 kubeadm.go:322] 
	I0821 11:14:43.387469 2768494 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token lvr7jn.abzu5d6dy061zbg8 \
	I0821 11:14:43.387573 2768494 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:53df1391c07b454a6b96f5fce415fe23bfbfcda331215b828a9e1234aa2104c1 \
	I0821 11:14:43.387599 2768494 kubeadm.go:322]     --control-plane 
	I0821 11:14:43.387607 2768494 kubeadm.go:322] 
	I0821 11:14:43.387689 2768494 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0821 11:14:43.387698 2768494 kubeadm.go:322] 
	I0821 11:14:43.387775 2768494 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token lvr7jn.abzu5d6dy061zbg8 \
	I0821 11:14:43.387877 2768494 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:53df1391c07b454a6b96f5fce415fe23bfbfcda331215b828a9e1234aa2104c1 
	I0821 11:14:43.391114 2768494 kubeadm.go:322] W0821 11:14:22.536907    1229 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0821 11:14:43.391325 2768494 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1041-aws\n", err: exit status 1
	I0821 11:14:43.391429 2768494 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0821 11:14:43.391556 2768494 kubeadm.go:322] W0821 11:14:28.887442    1229 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0821 11:14:43.391676 2768494 kubeadm.go:322] W0821 11:14:28.888847    1229 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0821 11:14:43.391694 2768494 cni.go:84] Creating CNI manager for ""
	I0821 11:14:43.391702 2768494 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0821 11:14:43.394444 2768494 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0821 11:14:43.396982 2768494 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0821 11:14:43.402744 2768494 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I0821 11:14:43.402766 2768494 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0821 11:14:43.427736 2768494 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0821 11:14:43.890035 2768494 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0821 11:14:43.890162 2768494 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:14:43.890229 2768494 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43 minikube.k8s.io/name=ingress-addon-legacy-354854 minikube.k8s.io/updated_at=2023_08_21T11_14_43_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:14:44.053909 2768494 ops.go:34] apiserver oom_adj: -16
	I0821 11:14:44.053995 2768494 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:14:44.160738 2768494 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:14:44.752246 2768494 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:14:45.251695 2768494 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:14:45.751684 2768494 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:14:46.252409 2768494 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:14:46.751912 2768494 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:14:47.252457 2768494 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:14:47.752524 2768494 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:14:48.252411 2768494 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:14:48.752310 2768494 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:14:49.252393 2768494 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:14:49.752418 2768494 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:14:50.251685 2768494 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:14:50.752637 2768494 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:14:51.252649 2768494 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:14:51.752145 2768494 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:14:52.251678 2768494 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:14:52.752153 2768494 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:14:53.252642 2768494 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:14:53.751700 2768494 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:14:54.252069 2768494 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:14:54.751684 2768494 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:14:55.251695 2768494 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:14:55.751678 2768494 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:14:56.252515 2768494 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:14:56.752207 2768494 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:14:57.252502 2768494 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:14:57.751693 2768494 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:14:58.251634 2768494 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:14:58.391051 2768494 kubeadm.go:1081] duration metric: took 14.500936528s to wait for elevateKubeSystemPrivileges.
	I0821 11:14:58.391087 2768494 kubeadm.go:406] StartCluster complete in 35.979337734s
	I0821 11:14:58.391114 2768494 settings.go:142] acquiring lock: {Name:mk3be5267b0ceee2c9bd00120994fcda13aa9019 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 11:14:58.391174 2768494 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17102-2734539/kubeconfig
	I0821 11:14:58.391847 2768494 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17102-2734539/kubeconfig: {Name:mk4bece1b106c2586469807b701290be2026992b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 11:14:58.392582 2768494 kapi.go:59] client config for ingress-addon-legacy-354854: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/ingress-addon-legacy-354854/client.crt", KeyFile:"/home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/ingress-addon-legacy-354854/client.key", CAFile:"/home/jenkins/minikube-integration/17102-2734539/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1721b00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0821 11:14:58.394039 2768494 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0821 11:14:58.394106 2768494 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-354854"
	I0821 11:14:58.394120 2768494 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-354854"
	I0821 11:14:58.394176 2768494 host.go:66] Checking if "ingress-addon-legacy-354854" exists ...
	I0821 11:14:58.394177 2768494 config.go:182] Loaded profile config "ingress-addon-legacy-354854": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0821 11:14:58.394234 2768494 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0821 11:14:58.394619 2768494 cert_rotation.go:137] Starting client certificate rotation controller
	I0821 11:14:58.394629 2768494 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-354854 --format={{.State.Status}}
	I0821 11:14:58.394647 2768494 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-354854"
	I0821 11:14:58.394661 2768494 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-354854"
	I0821 11:14:58.394941 2768494 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-354854 --format={{.State.Status}}
	I0821 11:14:58.428655 2768494 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0821 11:14:58.434005 2768494 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0821 11:14:58.434062 2768494 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0821 11:14:58.434254 2768494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-354854
	I0821 11:14:58.454683 2768494 kapi.go:59] client config for ingress-addon-legacy-354854: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/ingress-addon-legacy-354854/client.crt", KeyFile:"/home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/ingress-addon-legacy-354854/client.key", CAFile:"/home/jenkins/minikube-integration/17102-2734539/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1721b00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0821 11:14:58.468986 2768494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36203 SSHKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/machines/ingress-addon-legacy-354854/id_rsa Username:docker}
	I0821 11:14:58.474888 2768494 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-354854"
	I0821 11:14:58.474933 2768494 host.go:66] Checking if "ingress-addon-legacy-354854" exists ...
	I0821 11:14:58.475377 2768494 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-354854 --format={{.State.Status}}
	I0821 11:14:58.515507 2768494 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0821 11:14:58.515526 2768494 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0821 11:14:58.515585 2768494 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-354854
	I0821 11:14:58.537139 2768494 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-354854" context rescaled to 1 replicas
	I0821 11:14:58.537185 2768494 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0821 11:14:58.542461 2768494 out.go:177] * Verifying Kubernetes components...
	I0821 11:14:58.547246 2768494 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0821 11:14:58.558034 2768494 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36203 SSHKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/machines/ingress-addon-legacy-354854/id_rsa Username:docker}
	I0821 11:14:58.656495 2768494 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0821 11:14:58.657133 2768494 kapi.go:59] client config for ingress-addon-legacy-354854: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/ingress-addon-legacy-354854/client.crt", KeyFile:"/home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/ingress-addon-legacy-354854/client.key", CAFile:"/home/jenkins/minikube-integration/17102-2734539/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1721b00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0821 11:14:58.657376 2768494 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-354854" to be "Ready" ...
	I0821 11:14:58.670261 2768494 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0821 11:14:58.797552 2768494 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0821 11:14:59.073034 2768494 start.go:901] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0821 11:14:59.186247 2768494 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0821 11:14:59.188109 2768494 addons.go:502] enable addons completed in 794.066068ms: enabled=[storage-provisioner default-storageclass]
	I0821 11:15:00.687688 2768494 node_ready.go:58] node "ingress-addon-legacy-354854" has status "Ready":"False"
	I0821 11:15:02.688272 2768494 node_ready.go:58] node "ingress-addon-legacy-354854" has status "Ready":"False"
	I0821 11:15:05.188386 2768494 node_ready.go:58] node "ingress-addon-legacy-354854" has status "Ready":"False"
	I0821 11:15:07.188060 2768494 node_ready.go:49] node "ingress-addon-legacy-354854" has status "Ready":"True"
	I0821 11:15:07.188086 2768494 node_ready.go:38] duration metric: took 8.530692459s waiting for node "ingress-addon-legacy-354854" to be "Ready" ...
	I0821 11:15:07.188096 2768494 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0821 11:15:07.195101 2768494 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-f5nzs" in "kube-system" namespace to be "Ready" ...
	I0821 11:15:09.220374 2768494 pod_ready.go:102] pod "coredns-66bff467f8-f5nzs" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-08-21 11:15:09 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0821 11:15:11.706015 2768494 pod_ready.go:102] pod "coredns-66bff467f8-f5nzs" in "kube-system" namespace has status "Ready":"False"
	I0821 11:15:13.706346 2768494 pod_ready.go:102] pod "coredns-66bff467f8-f5nzs" in "kube-system" namespace has status "Ready":"False"
	I0821 11:15:16.205730 2768494 pod_ready.go:102] pod "coredns-66bff467f8-f5nzs" in "kube-system" namespace has status "Ready":"False"
	I0821 11:15:18.206279 2768494 pod_ready.go:92] pod "coredns-66bff467f8-f5nzs" in "kube-system" namespace has status "Ready":"True"
	I0821 11:15:18.206309 2768494 pod_ready.go:81] duration metric: took 11.011177467s waiting for pod "coredns-66bff467f8-f5nzs" in "kube-system" namespace to be "Ready" ...
	I0821 11:15:18.206320 2768494 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-354854" in "kube-system" namespace to be "Ready" ...
	I0821 11:15:18.211259 2768494 pod_ready.go:92] pod "etcd-ingress-addon-legacy-354854" in "kube-system" namespace has status "Ready":"True"
	I0821 11:15:18.211289 2768494 pod_ready.go:81] duration metric: took 4.9616ms waiting for pod "etcd-ingress-addon-legacy-354854" in "kube-system" namespace to be "Ready" ...
	I0821 11:15:18.211304 2768494 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-354854" in "kube-system" namespace to be "Ready" ...
	I0821 11:15:18.216326 2768494 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-354854" in "kube-system" namespace has status "Ready":"True"
	I0821 11:15:18.216355 2768494 pod_ready.go:81] duration metric: took 5.043182ms waiting for pod "kube-apiserver-ingress-addon-legacy-354854" in "kube-system" namespace to be "Ready" ...
	I0821 11:15:18.216369 2768494 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-354854" in "kube-system" namespace to be "Ready" ...
	I0821 11:15:18.221659 2768494 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-354854" in "kube-system" namespace has status "Ready":"True"
	I0821 11:15:18.221685 2768494 pod_ready.go:81] duration metric: took 5.301778ms waiting for pod "kube-controller-manager-ingress-addon-legacy-354854" in "kube-system" namespace to be "Ready" ...
	I0821 11:15:18.221698 2768494 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-m7w4l" in "kube-system" namespace to be "Ready" ...
	I0821 11:15:18.226666 2768494 pod_ready.go:92] pod "kube-proxy-m7w4l" in "kube-system" namespace has status "Ready":"True"
	I0821 11:15:18.226693 2768494 pod_ready.go:81] duration metric: took 4.988192ms waiting for pod "kube-proxy-m7w4l" in "kube-system" namespace to be "Ready" ...
	I0821 11:15:18.226706 2768494 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-354854" in "kube-system" namespace to be "Ready" ...
	I0821 11:15:18.401320 2768494 request.go:629] Waited for 174.549359ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-354854
	I0821 11:15:18.601330 2768494 request.go:629] Waited for 197.334317ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-354854
	I0821 11:15:18.603930 2768494 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-354854" in "kube-system" namespace has status "Ready":"True"
	I0821 11:15:18.603954 2768494 pod_ready.go:81] duration metric: took 377.239516ms waiting for pod "kube-scheduler-ingress-addon-legacy-354854" in "kube-system" namespace to be "Ready" ...
	I0821 11:15:18.603970 2768494 pod_ready.go:38] duration metric: took 11.415858801s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0821 11:15:18.603984 2768494 api_server.go:52] waiting for apiserver process to appear ...
	I0821 11:15:18.604045 2768494 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0821 11:15:18.617323 2768494 api_server.go:72] duration metric: took 20.080107602s to wait for apiserver process to appear ...
	I0821 11:15:18.617353 2768494 api_server.go:88] waiting for apiserver healthz status ...
	I0821 11:15:18.617375 2768494 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0821 11:15:18.626797 2768494 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0821 11:15:18.627769 2768494 api_server.go:141] control plane version: v1.18.20
	I0821 11:15:18.627790 2768494 api_server.go:131] duration metric: took 10.431054ms to wait for apiserver health ...
	I0821 11:15:18.627800 2768494 system_pods.go:43] waiting for kube-system pods to appear ...
	I0821 11:15:18.801207 2768494 request.go:629] Waited for 173.329863ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0821 11:15:18.807273 2768494 system_pods.go:59] 8 kube-system pods found
	I0821 11:15:18.807312 2768494 system_pods.go:61] "coredns-66bff467f8-f5nzs" [93d3fd6e-7a60-4457-b794-ee13381991f5] Running
	I0821 11:15:18.807319 2768494 system_pods.go:61] "etcd-ingress-addon-legacy-354854" [08930b25-44f0-49d8-ba31-1d93816981fa] Running
	I0821 11:15:18.807325 2768494 system_pods.go:61] "kindnet-v77bw" [e55d8e29-27dc-44ec-926f-1400711bafac] Running
	I0821 11:15:18.807357 2768494 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-354854" [c20b7f16-b79e-42d3-b6c8-ed9554022eee] Running
	I0821 11:15:18.807368 2768494 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-354854" [667cdbba-6a12-4a87-bdbf-611b4406a3fb] Running
	I0821 11:15:18.807378 2768494 system_pods.go:61] "kube-proxy-m7w4l" [eb9642f3-360b-45c1-90bb-924cf7bc4745] Running
	I0821 11:15:18.807383 2768494 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-354854" [b3fb903b-de67-4012-9e10-64696052e408] Running
	I0821 11:15:18.807398 2768494 system_pods.go:61] "storage-provisioner" [2d6283f1-0d3a-4f97-8c3f-71714ba12c10] Running
	I0821 11:15:18.807404 2768494 system_pods.go:74] duration metric: took 179.599515ms to wait for pod list to return data ...
	I0821 11:15:18.807413 2768494 default_sa.go:34] waiting for default service account to be created ...
	I0821 11:15:19.002014 2768494 request.go:629] Waited for 194.474501ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0821 11:15:19.004837 2768494 default_sa.go:45] found service account: "default"
	I0821 11:15:19.004867 2768494 default_sa.go:55] duration metric: took 197.44309ms for default service account to be created ...
	I0821 11:15:19.004878 2768494 system_pods.go:116] waiting for k8s-apps to be running ...
	I0821 11:15:19.201216 2768494 request.go:629] Waited for 196.253108ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0821 11:15:19.207334 2768494 system_pods.go:86] 8 kube-system pods found
	I0821 11:15:19.207370 2768494 system_pods.go:89] "coredns-66bff467f8-f5nzs" [93d3fd6e-7a60-4457-b794-ee13381991f5] Running
	I0821 11:15:19.207378 2768494 system_pods.go:89] "etcd-ingress-addon-legacy-354854" [08930b25-44f0-49d8-ba31-1d93816981fa] Running
	I0821 11:15:19.207415 2768494 system_pods.go:89] "kindnet-v77bw" [e55d8e29-27dc-44ec-926f-1400711bafac] Running
	I0821 11:15:19.207429 2768494 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-354854" [c20b7f16-b79e-42d3-b6c8-ed9554022eee] Running
	I0821 11:15:19.207434 2768494 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-354854" [667cdbba-6a12-4a87-bdbf-611b4406a3fb] Running
	I0821 11:15:19.207439 2768494 system_pods.go:89] "kube-proxy-m7w4l" [eb9642f3-360b-45c1-90bb-924cf7bc4745] Running
	I0821 11:15:19.207445 2768494 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-354854" [b3fb903b-de67-4012-9e10-64696052e408] Running
	I0821 11:15:19.207455 2768494 system_pods.go:89] "storage-provisioner" [2d6283f1-0d3a-4f97-8c3f-71714ba12c10] Running
	I0821 11:15:19.207462 2768494 system_pods.go:126] duration metric: took 202.579267ms to wait for k8s-apps to be running ...
	I0821 11:15:19.207487 2768494 system_svc.go:44] waiting for kubelet service to be running ....
	I0821 11:15:19.207566 2768494 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0821 11:15:19.221640 2768494 system_svc.go:56] duration metric: took 14.135604ms WaitForService to wait for kubelet.
	I0821 11:15:19.221667 2768494 kubeadm.go:581] duration metric: took 20.684457008s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0821 11:15:19.221687 2768494 node_conditions.go:102] verifying NodePressure condition ...
	I0821 11:15:19.400943 2768494 request.go:629] Waited for 179.16946ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0821 11:15:19.403996 2768494 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0821 11:15:19.404028 2768494 node_conditions.go:123] node cpu capacity is 2
	I0821 11:15:19.404039 2768494 node_conditions.go:105] duration metric: took 182.34698ms to run NodePressure ...
	I0821 11:15:19.404051 2768494 start.go:228] waiting for startup goroutines ...
	I0821 11:15:19.404061 2768494 start.go:233] waiting for cluster config update ...
	I0821 11:15:19.404072 2768494 start.go:242] writing updated cluster config ...
	I0821 11:15:19.404361 2768494 ssh_runner.go:195] Run: rm -f paused
	I0821 11:15:19.461234 2768494 start.go:600] kubectl: 1.28.0, cluster: 1.18.20 (minor skew: 10)
	I0821 11:15:19.463599 2768494 out.go:177] 
	W0821 11:15:19.465556 2768494 out.go:239] ! /usr/local/bin/kubectl is version 1.28.0, which may have incompatibilities with Kubernetes 1.18.20.
	I0821 11:15:19.467366 2768494 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0821 11:15:19.469131 2768494 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-354854" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* Aug 21 11:18:28 ingress-addon-legacy-354854 crio[899]: time="2023-08-21 11:18:28.714354995Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=18ea35c7-fb30-4a57-8d1e-aa2425beebe3 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 21 11:18:28 ingress-addon-legacy-354854 crio[899]: time="2023-08-21 11:18:28.714518019Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:13753a81eccfdd153bf7fc9a4c9198edbcce0110e7f46ed0d38cc654a6458ff5,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:845f77fab71033404f4cfceaa1ddb27b70c3551ceb22a5e7f4498cdda6c9daea],Size_:28496999,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=18ea35c7-fb30-4a57-8d1e-aa2425beebe3 name=/runtime.v1alpha2.ImageService/ImageStatus
	Aug 21 11:18:28 ingress-addon-legacy-354854 crio[899]: time="2023-08-21 11:18:28.715237636Z" level=info msg="Creating container: default/hello-world-app-5f5d8b66bb-x8fjv/hello-world-app" id=f88eb715-acfd-4868-ae5d-d97dcd76c6a6 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 21 11:18:28 ingress-addon-legacy-354854 crio[899]: time="2023-08-21 11:18:28.715323402Z" level=warning msg="Allowed annotations are specified for workload []"
	Aug 21 11:18:28 ingress-addon-legacy-354854 crio[899]: time="2023-08-21 11:18:28.816070750Z" level=info msg="Created container 5e730be28ce919d384a60c9cda8c9563375d49ddb28859e2b352f0462c362755: default/hello-world-app-5f5d8b66bb-x8fjv/hello-world-app" id=f88eb715-acfd-4868-ae5d-d97dcd76c6a6 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Aug 21 11:18:28 ingress-addon-legacy-354854 crio[899]: time="2023-08-21 11:18:28.816925781Z" level=info msg="Starting container: 5e730be28ce919d384a60c9cda8c9563375d49ddb28859e2b352f0462c362755" id=e6e7c585-5ddf-47f3-b7d9-51e88d297ce2 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Aug 21 11:18:28 ingress-addon-legacy-354854 crio[899]: time="2023-08-21 11:18:28.829680457Z" level=info msg="Started container" PID=3622 containerID=5e730be28ce919d384a60c9cda8c9563375d49ddb28859e2b352f0462c362755 description=default/hello-world-app-5f5d8b66bb-x8fjv/hello-world-app id=e6e7c585-5ddf-47f3-b7d9-51e88d297ce2 name=/runtime.v1alpha2.RuntimeService/StartContainer sandboxID=1d1d0363dcaf0c8822f5edc488ee50314faef7ebf83df735902047c9a8c0bd1f
	Aug 21 11:18:28 ingress-addon-legacy-354854 conmon[3611]: conmon 5e730be28ce919d384a6 <ninfo>: container 3622 exited with status 1
	Aug 21 11:18:29 ingress-addon-legacy-354854 crio[899]: time="2023-08-21 11:18:29.230683722Z" level=info msg="Removing container: 1b3b35a1bd7a2ef23c3e5755137c1d4b1cbd4ee37739f9cb9806e9bdb9f821b5" id=d46a962a-97bd-4a31-8396-b3609f64bf80 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Aug 21 11:18:29 ingress-addon-legacy-354854 crio[899]: time="2023-08-21 11:18:29.255669408Z" level=info msg="Removed container 1b3b35a1bd7a2ef23c3e5755137c1d4b1cbd4ee37739f9cb9806e9bdb9f821b5: default/hello-world-app-5f5d8b66bb-x8fjv/hello-world-app" id=d46a962a-97bd-4a31-8396-b3609f64bf80 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Aug 21 11:18:29 ingress-addon-legacy-354854 crio[899]: time="2023-08-21 11:18:29.637096905Z" level=warning msg="Stopping container e8d7e68ceb751344a225312ab89c82c35fe5f8f14d68c8dcdf5e48314e5280b4 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=904670be-9aed-482f-9ad2-5cdaeb8ef14a name=/runtime.v1alpha2.RuntimeService/StopContainer
	Aug 21 11:18:29 ingress-addon-legacy-354854 conmon[2704]: conmon e8d7e68ceb751344a225 <ninfo>: container 2716 exited with status 137
	Aug 21 11:18:29 ingress-addon-legacy-354854 crio[899]: time="2023-08-21 11:18:29.819313444Z" level=info msg="Stopped container e8d7e68ceb751344a225312ab89c82c35fe5f8f14d68c8dcdf5e48314e5280b4: ingress-nginx/ingress-nginx-controller-7fcf777cb7-54phf/controller" id=e303a30b-e84b-4542-aaff-45c0ee45a884 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Aug 21 11:18:29 ingress-addon-legacy-354854 crio[899]: time="2023-08-21 11:18:29.821810223Z" level=info msg="Stopped container e8d7e68ceb751344a225312ab89c82c35fe5f8f14d68c8dcdf5e48314e5280b4: ingress-nginx/ingress-nginx-controller-7fcf777cb7-54phf/controller" id=904670be-9aed-482f-9ad2-5cdaeb8ef14a name=/runtime.v1alpha2.RuntimeService/StopContainer
	Aug 21 11:18:29 ingress-addon-legacy-354854 crio[899]: time="2023-08-21 11:18:29.822028574Z" level=info msg="Stopping pod sandbox: 20cce0f7039ba6cb1cadb38f1cdf50f973e5b112f0f9ea41e73249c7d1191a24" id=30c67c72-695e-4a50-9d87-d99e1552bae3 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 21 11:18:29 ingress-addon-legacy-354854 crio[899]: time="2023-08-21 11:18:29.825230061Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-5QOGOEG6SOKMB3CG - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-IMPYPUM4O5AOQA7I - [0:0]\n-X KUBE-HP-IMPYPUM4O5AOQA7I\n-X KUBE-HP-5QOGOEG6SOKMB3CG\nCOMMIT\n"
	Aug 21 11:18:29 ingress-addon-legacy-354854 crio[899]: time="2023-08-21 11:18:29.831451870Z" level=info msg="Stopping pod sandbox: 20cce0f7039ba6cb1cadb38f1cdf50f973e5b112f0f9ea41e73249c7d1191a24" id=be0452e8-ffb6-4484-9c06-4fa0ea75f046 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 21 11:18:29 ingress-addon-legacy-354854 crio[899]: time="2023-08-21 11:18:29.831931770Z" level=info msg="Closing host port tcp:80"
	Aug 21 11:18:29 ingress-addon-legacy-354854 crio[899]: time="2023-08-21 11:18:29.831976955Z" level=info msg="Closing host port tcp:443"
	Aug 21 11:18:29 ingress-addon-legacy-354854 crio[899]: time="2023-08-21 11:18:29.833266324Z" level=info msg="Host port tcp:80 does not have an open socket"
	Aug 21 11:18:29 ingress-addon-legacy-354854 crio[899]: time="2023-08-21 11:18:29.833290274Z" level=info msg="Host port tcp:443 does not have an open socket"
	Aug 21 11:18:29 ingress-addon-legacy-354854 crio[899]: time="2023-08-21 11:18:29.833442624Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7fcf777cb7-54phf Namespace:ingress-nginx ID:20cce0f7039ba6cb1cadb38f1cdf50f973e5b112f0f9ea41e73249c7d1191a24 UID:259cf53f-4187-4314-a680-f1f7444c743a NetNS:/var/run/netns/31bef61d-9f72-4c98-a302-b1cd40195fad Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Aug 21 11:18:29 ingress-addon-legacy-354854 crio[899]: time="2023-08-21 11:18:29.833584791Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7fcf777cb7-54phf from CNI network \"kindnet\" (type=ptp)"
	Aug 21 11:18:29 ingress-addon-legacy-354854 crio[899]: time="2023-08-21 11:18:29.859591799Z" level=info msg="Stopped pod sandbox: 20cce0f7039ba6cb1cadb38f1cdf50f973e5b112f0f9ea41e73249c7d1191a24" id=30c67c72-695e-4a50-9d87-d99e1552bae3 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Aug 21 11:18:29 ingress-addon-legacy-354854 crio[899]: time="2023-08-21 11:18:29.859702943Z" level=info msg="Stopped pod sandbox (already stopped): 20cce0f7039ba6cb1cadb38f1cdf50f973e5b112f0f9ea41e73249c7d1191a24" id=be0452e8-ffb6-4484-9c06-4fa0ea75f046 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	5e730be28ce91       13753a81eccfdd153bf7fc9a4c9198edbcce0110e7f46ed0d38cc654a6458ff5                                                   6 seconds ago       Exited              hello-world-app           2                   1d1d0363dcaf0       hello-world-app-5f5d8b66bb-x8fjv
	af9ff01834e61       docker.io/library/nginx@sha256:cac882be2b7305e0c8d3e3cd0575a2fd58f5fde6dd5d6299605aa0f3e67ca385                    2 minutes ago       Running             nginx                     0                   2b51f3153dafe       nginx
	e8d7e68ceb751       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   3 minutes ago       Exited              controller                0                   20cce0f7039ba       ingress-nginx-controller-7fcf777cb7-54phf
	98082366be3ba       docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7     3 minutes ago       Exited              patch                     0                   fd2d6d730e639       ingress-nginx-admission-patch-wbtpc
	e464ff160edee       docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7     3 minutes ago       Exited              create                    0                   7ca1095e2a6c3       ingress-nginx-admission-create-ffnlc
	c56d23bd862aa       gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2    3 minutes ago       Running             storage-provisioner       0                   f9ef16da158b9       storage-provisioner
	c49fafc45cf55       6e17ba78cf3ebe1410fe828dc4ca57d3df37ad0b3c1a64161e5c27d57a24d184                                                   3 minutes ago       Running             coredns                   0                   aa093b8cf8296       coredns-66bff467f8-f5nzs
	090dd92ac7415       docker.io/kindest/kindnetd@sha256:2c39858b71cf6c5737ff0daa8130a6574d4c6bd2a7dacaf002060c02f2bc1b4f                 3 minutes ago       Running             kindnet-cni               0                   9c09aced579c0       kindnet-v77bw
	b9c9636a3af9d       565297bc6f7d41fdb7a8ac7f9d75617ef4e6efdd1b1e41af6e060e19c44c28a8                                                   3 minutes ago       Running             kube-proxy                0                   852e0b76b9f0e       kube-proxy-m7w4l
	fa0acd63c0a63       095f37015706de6eedb4f57eb2f9a25a1e3bf4bec63d50ba73f8968ef4094fd1                                                   4 minutes ago       Running             kube-scheduler            0                   942a0c619659f       kube-scheduler-ingress-addon-legacy-354854
	0a900fee76269       ab707b0a0ea339254cc6e3f2e7d618d4793d5129acb2288e9194769271404952                                                   4 minutes ago       Running             etcd                      0                   ba942c74c2c70       etcd-ingress-addon-legacy-354854
	ad0e8c0baf772       68a4fac29a865f21217550dbd3570dc1adbc602cf05d6eeb6f060eec1359e1f1                                                   4 minutes ago       Running             kube-controller-manager   0                   2b8cbe9375054       kube-controller-manager-ingress-addon-legacy-354854
	7aac26cd42787       2694cf044d66591c37b12c60ce1f1cdba3d271af5ebda43a2e4d32ebbadd97d0                                                   4 minutes ago       Running             kube-apiserver            0                   18a4efd1e1b30       kube-apiserver-ingress-addon-legacy-354854
	
	* 
	* ==> coredns [c49fafc45cf553e8cc5af0e343591cf556cbb43e4a9f1c61daa0a851ec3d0835] <==
	* [INFO] 10.244.0.5:58709 - 27445 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000158126s
	[INFO] 10.244.0.5:58709 - 34339 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001207509s
	[INFO] 10.244.0.5:43190 - 9079 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00191974s
	[INFO] 10.244.0.5:43190 - 61186 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001418868s
	[INFO] 10.244.0.5:43190 - 46776 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000130492s
	[INFO] 10.244.0.5:58709 - 19440 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001658174s
	[INFO] 10.244.0.5:58709 - 8791 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000046761s
	[INFO] 10.244.0.5:45395 - 479 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000081804s
	[INFO] 10.244.0.5:59966 - 17304 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000030842s
	[INFO] 10.244.0.5:59966 - 6732 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000041189s
	[INFO] 10.244.0.5:59966 - 23700 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000031056s
	[INFO] 10.244.0.5:59966 - 64780 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000030342s
	[INFO] 10.244.0.5:59966 - 12261 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000039441s
	[INFO] 10.244.0.5:59966 - 18638 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000032024s
	[INFO] 10.244.0.5:45395 - 50400 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000071326s
	[INFO] 10.244.0.5:45395 - 13087 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000138622s
	[INFO] 10.244.0.5:59966 - 29988 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001053985s
	[INFO] 10.244.0.5:45395 - 48546 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000056967s
	[INFO] 10.244.0.5:59966 - 49286 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000960605s
	[INFO] 10.244.0.5:45395 - 5952 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000054448s
	[INFO] 10.244.0.5:59966 - 62947 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000038597s
	[INFO] 10.244.0.5:45395 - 51490 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000053726s
	[INFO] 10.244.0.5:45395 - 29947 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000930492s
	[INFO] 10.244.0.5:45395 - 56385 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000952325s
	[INFO] 10.244.0.5:45395 - 36631 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00004379s
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-354854
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-354854
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43
	                    minikube.k8s.io/name=ingress-addon-legacy-354854
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_08_21T11_14_43_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 21 Aug 2023 11:14:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-354854
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 21 Aug 2023 11:18:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 21 Aug 2023 11:18:16 +0000   Mon, 21 Aug 2023 11:14:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 21 Aug 2023 11:18:16 +0000   Mon, 21 Aug 2023 11:14:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 21 Aug 2023 11:18:16 +0000   Mon, 21 Aug 2023 11:14:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 21 Aug 2023 11:18:16 +0000   Mon, 21 Aug 2023 11:15:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-354854
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022564Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022564Ki
	  pods:               110
	System Info:
	  Machine ID:                 9a0ecf05d670416891b800e6dfbdca50
	  System UUID:                999ee003-a505-415b-b581-daaa1b079529
	  Boot ID:                    02e315f4-a354-4b0b-b564-f929fd2e643c
	  Kernel Version:             5.15.0-1041-aws
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-x8fjv                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m47s
	  kube-system                 coredns-66bff467f8-f5nzs                               100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     3m37s
	  kube-system                 etcd-ingress-addon-legacy-354854                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 kindnet-v77bw                                          100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      3m37s
	  kube-system                 kube-apiserver-ingress-addon-legacy-354854             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-354854    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 kube-proxy-m7w4l                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m37s
	  kube-system                 kube-scheduler-ingress-addon-legacy-354854             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             120Mi (1%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  NodeHasSufficientMemory  4m3s (x5 over 4m3s)  kubelet     Node ingress-addon-legacy-354854 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m3s (x5 over 4m3s)  kubelet     Node ingress-addon-legacy-354854 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m3s (x4 over 4m3s)  kubelet     Node ingress-addon-legacy-354854 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m49s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m49s                kubelet     Node ingress-addon-legacy-354854 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m49s                kubelet     Node ingress-addon-legacy-354854 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m49s                kubelet     Node ingress-addon-legacy-354854 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m34s                kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                3m29s                kubelet     Node ingress-addon-legacy-354854 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.001023] FS-Cache: O-key=[8] '9a4b5c0100000000'
	[  +0.000699] FS-Cache: N-cookie c=000000d2 [p=000000c9 fl=2 nc=0 na=1]
	[  +0.000916] FS-Cache: N-cookie d=00000000128a3fc5{9p.inode} n=00000000cd9d496d
	[  +0.001054] FS-Cache: N-key=[8] '9a4b5c0100000000'
	[  +0.002483] FS-Cache: Duplicate cookie detected
	[  +0.000733] FS-Cache: O-cookie c=000000cc [p=000000c9 fl=226 nc=0 na=1]
	[  +0.000977] FS-Cache: O-cookie d=00000000128a3fc5{9p.inode} n=00000000aedceb4a
	[  +0.001055] FS-Cache: O-key=[8] '9a4b5c0100000000'
	[  +0.000715] FS-Cache: N-cookie c=000000d3 [p=000000c9 fl=2 nc=0 na=1]
	[  +0.000930] FS-Cache: N-cookie d=00000000128a3fc5{9p.inode} n=000000000adb5282
	[  +0.001063] FS-Cache: N-key=[8] '9a4b5c0100000000'
	[  +3.434482] FS-Cache: Duplicate cookie detected
	[  +0.000767] FS-Cache: O-cookie c=000000ca [p=000000c9 fl=226 nc=0 na=1]
	[  +0.000971] FS-Cache: O-cookie d=00000000128a3fc5{9p.inode} n=000000007708d8a6
	[  +0.001033] FS-Cache: O-key=[8] '994b5c0100000000'
	[  +0.000696] FS-Cache: N-cookie c=000000d5 [p=000000c9 fl=2 nc=0 na=1]
	[  +0.000936] FS-Cache: N-cookie d=00000000128a3fc5{9p.inode} n=000000006b8e342c
	[  +0.001064] FS-Cache: N-key=[8] '994b5c0100000000'
	[  +0.475929] FS-Cache: Duplicate cookie detected
	[  +0.000712] FS-Cache: O-cookie c=000000cf [p=000000c9 fl=226 nc=0 na=1]
	[  +0.000952] FS-Cache: O-cookie d=00000000128a3fc5{9p.inode} n=00000000939f1609
	[  +0.001035] FS-Cache: O-key=[8] '9f4b5c0100000000'
	[  +0.000728] FS-Cache: N-cookie c=000000d6 [p=000000c9 fl=2 nc=0 na=1]
	[  +0.000926] FS-Cache: N-cookie d=00000000128a3fc5{9p.inode} n=00000000fa5b2717
	[  +0.001032] FS-Cache: N-key=[8] '9f4b5c0100000000'
	
	* 
	* ==> etcd [0a900fee762699ccd4a631fb5f0cf1065612ca115c3abb40118f474356b4fe6e] <==
	* raft2023/08/21 11:14:33 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/08/21 11:14:33 INFO: aec36adc501070cc became follower at term 1
	raft2023/08/21 11:14:33 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-08-21 11:14:34.622176 W | auth: simple token is not cryptographically signed
	2023-08-21 11:14:34.747159 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-08-21 11:14:34.747690 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2023/08/21 11:14:34 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-08-21 11:14:34.748511 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2023-08-21 11:14:34.751480 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-08-21 11:14:34.751692 I | embed: listening for peers on 192.168.49.2:2380
	2023-08-21 11:14:34.751872 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2023/08/21 11:14:35 INFO: aec36adc501070cc is starting a new election at term 1
	raft2023/08/21 11:14:35 INFO: aec36adc501070cc became candidate at term 2
	raft2023/08/21 11:14:35 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2023/08/21 11:14:35 INFO: aec36adc501070cc became leader at term 2
	raft2023/08/21 11:14:35 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2023-08-21 11:14:35.096763 I | etcdserver: setting up the initial cluster version to 3.4
	2023-08-21 11:14:35.096856 I | etcdserver: published {Name:ingress-addon-legacy-354854 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2023-08-21 11:14:35.096886 I | embed: ready to serve client requests
	2023-08-21 11:14:35.261895 I | embed: ready to serve client requests
	2023-08-21 11:14:35.263227 I | embed: serving client requests on 192.168.49.2:2379
	2023-08-21 11:14:35.263292 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-08-21 11:14:35.274000 I | etcdserver/api: enabled capabilities for version 3.4
	2023-08-21 11:14:35.286264 W | etcdserver: request "ID:8128023262337489156 Method:\"PUT\" Path:\"/0/version\" Val:\"3.4.0\" " with result "" took too long (140.18909ms) to execute
	2023-08-21 11:14:35.570548 I | embed: serving client requests on 127.0.0.1:2379
	
	* 
	* ==> kernel <==
	*  11:18:35 up 20:00,  0 users,  load average: 1.01, 1.29, 1.71
	Linux ingress-addon-legacy-354854 5.15.0-1041-aws #46~20.04.1-Ubuntu SMP Wed Jul 19 15:39:29 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [090dd92ac7415aedf91d619647ac63dfd98eeada22188a874162e3f4b4e601ae] <==
	* I0821 11:16:31.639272       1 main.go:227] handling current node
	I0821 11:16:41.651778       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0821 11:16:41.651807       1 main.go:227] handling current node
	I0821 11:16:51.663253       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0821 11:16:51.663281       1 main.go:227] handling current node
	I0821 11:17:01.670666       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0821 11:17:01.670693       1 main.go:227] handling current node
	I0821 11:17:11.679994       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0821 11:17:11.680023       1 main.go:227] handling current node
	I0821 11:17:21.684029       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0821 11:17:21.684056       1 main.go:227] handling current node
	I0821 11:17:31.692940       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0821 11:17:31.692971       1 main.go:227] handling current node
	I0821 11:17:41.696453       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0821 11:17:41.696480       1 main.go:227] handling current node
	I0821 11:17:51.707477       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0821 11:17:51.707503       1 main.go:227] handling current node
	I0821 11:18:01.710687       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0821 11:18:01.710714       1 main.go:227] handling current node
	I0821 11:18:11.722259       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0821 11:18:11.722291       1 main.go:227] handling current node
	I0821 11:18:21.733121       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0821 11:18:21.733149       1 main.go:227] handling current node
	I0821 11:18:31.745201       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0821 11:18:31.745227       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [7aac26cd427873b28f9646dbbf54bf8afd9665a73711b5dec6eed4603f314ce7] <==
	* I0821 11:14:40.224128       1 cache.go:39] Caches are synced for autoregister controller
	I0821 11:14:40.224982       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0821 11:14:40.231863       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0821 11:14:40.232434       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0821 11:14:40.310698       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0821 11:14:41.007690       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0821 11:14:41.007816       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0821 11:14:41.022435       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0821 11:14:41.029196       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0821 11:14:41.029222       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0821 11:14:41.395675       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0821 11:14:41.441925       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0821 11:14:41.581438       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0821 11:14:41.582542       1 controller.go:609] quota admission added evaluator for: endpoints
	I0821 11:14:41.586315       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0821 11:14:42.484553       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0821 11:14:43.268266       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0821 11:14:43.371240       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0821 11:14:46.670605       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0821 11:14:58.128937       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0821 11:14:58.214778       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0821 11:15:20.340799       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0821 11:15:48.752900       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	E0821 11:18:26.732026       1 watch.go:251] unable to encode watch object *v1.WatchEvent: client disconnected (&streaming.encoder{writer:(*http2.responseWriter)(0x40096aa5f0), encoder:(*versioning.codec)(0x400cf44f00), buf:(*bytes.Buffer)(0x40067061e0)})
	E0821 11:18:27.638973       1 authentication.go:53] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
	
	* 
	* ==> kube-controller-manager [ad0e8c0baf772569485347e25b23303c801b4285ac7e1b9bb449eb9023740552] <==
	* I0821 11:14:58.248305       1 range_allocator.go:172] Starting range CIDR allocator
	I0821 11:14:58.248333       1 shared_informer.go:223] Waiting for caches to sync for cidrallocator
	I0821 11:14:58.248375       1 shared_informer.go:230] Caches are synced for cidrallocator 
	I0821 11:14:58.251433       1 shared_informer.go:230] Caches are synced for GC 
	I0821 11:14:58.302362       1 range_allocator.go:373] Set node ingress-addon-legacy-354854 PodCIDR to [10.244.0.0/24]
	I0821 11:14:58.349451       1 shared_informer.go:230] Caches are synced for ClusterRoleAggregator 
	I0821 11:14:58.406562       1 shared_informer.go:223] Waiting for caches to sync for resource quota
	I0821 11:14:58.432505       1 shared_informer.go:230] Caches are synced for endpoint_slice 
	I0821 11:14:58.491787       1 shared_informer.go:230] Caches are synced for endpoint 
	I0821 11:14:58.507780       1 shared_informer.go:230] Caches are synced for resource quota 
	I0821 11:14:58.518340       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"f5d9a410-5108-475e-b7cb-f84a07d98825", APIVersion:"apps/v1", ResourceVersion:"368", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I0821 11:14:58.535868       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0821 11:14:58.535957       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0821 11:14:58.555474       1 shared_informer.go:230] Caches are synced for resource quota 
	I0821 11:14:58.556537       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0821 11:14:58.632651       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"ff0c3424-5b4e-4097-8650-686be3f4a8e4", APIVersion:"apps/v1", ResourceVersion:"369", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-w8n88
	I0821 11:15:08.185075       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0821 11:15:20.305920       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"c42e0be7-abf9-4561-8b52-b27ba2b9b944", APIVersion:"apps/v1", ResourceVersion:"481", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0821 11:15:20.317255       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"817ddcc7-2f6c-46ec-9e07-ac4dc36fe884", APIVersion:"apps/v1", ResourceVersion:"482", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-54phf
	I0821 11:15:20.367136       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"6b266302-8439-4ffc-8e6b-cf29b9e771dd", APIVersion:"batch/v1", ResourceVersion:"491", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-ffnlc
	I0821 11:15:20.399746       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"0d023bf4-b9ee-46dc-bb13-302183397132", APIVersion:"batch/v1", ResourceVersion:"499", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-wbtpc
	I0821 11:15:22.798010       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"6b266302-8439-4ffc-8e6b-cf29b9e771dd", APIVersion:"batch/v1", ResourceVersion:"498", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0821 11:15:23.794245       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"0d023bf4-b9ee-46dc-bb13-302183397132", APIVersion:"batch/v1", ResourceVersion:"505", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0821 11:18:10.118975       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"f2ab68da-4711-4f7e-9696-f2859802c53a", APIVersion:"apps/v1", ResourceVersion:"723", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0821 11:18:10.133179       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"98506a97-211b-4437-b4a9-aa06c18c64fa", APIVersion:"apps/v1", ResourceVersion:"724", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-x8fjv
	
	* 
	* ==> kube-proxy [b9c9636a3af9d90de1a5de68f372fbb63482be4064f780005bd277f913342ba9] <==
	* W0821 11:15:01.122146       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0821 11:15:01.133868       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I0821 11:15:01.134105       1 server_others.go:186] Using iptables Proxier.
	I0821 11:15:01.134515       1 server.go:583] Version: v1.18.20
	I0821 11:15:01.137096       1 config.go:133] Starting endpoints config controller
	I0821 11:15:01.139817       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0821 11:15:01.138973       1 config.go:315] Starting service config controller
	I0821 11:15:01.141053       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0821 11:15:01.242761       1 shared_informer.go:230] Caches are synced for endpoints config 
	I0821 11:15:01.242761       1 shared_informer.go:230] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [fa0acd63c0a63549ed14f70e36c8a421424e1ed719435fdd1b2c663eb5afe0be] <==
	* I0821 11:14:40.186710       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0821 11:14:40.186816       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0821 11:14:40.189175       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0821 11:14:40.189403       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0821 11:14:40.189452       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0821 11:14:40.189498       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0821 11:14:40.218239       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0821 11:14:40.233147       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0821 11:14:40.233327       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0821 11:14:40.233444       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0821 11:14:40.233543       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0821 11:14:40.233692       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0821 11:14:40.233829       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0821 11:14:40.233957       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0821 11:14:40.234059       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0821 11:14:40.234154       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0821 11:14:40.234258       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0821 11:14:40.234371       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0821 11:14:41.103796       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0821 11:14:41.189006       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0821 11:14:41.221629       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0821 11:14:41.269063       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0821 11:14:42.889663       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E0821 11:14:58.196305       1 factory.go:503] pod: kube-system/coredns-66bff467f8-f5nzs is already present in the active queue
	E0821 11:14:59.191073       1 factory.go:503] pod: kube-system/storage-provisioner is already present in the active queue
	
	* 
	* ==> kubelet <==
	* Aug 21 11:18:14 ingress-addon-legacy-354854 kubelet[1624]: I0821 11:18:14.204753    1624 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 1b3b35a1bd7a2ef23c3e5755137c1d4b1cbd4ee37739f9cb9806e9bdb9f821b5
	Aug 21 11:18:14 ingress-addon-legacy-354854 kubelet[1624]: E0821 11:18:14.204992    1624 pod_workers.go:191] Error syncing pod 6c388f19-9663-487c-81e6-735baba406d4 ("hello-world-app-5f5d8b66bb-x8fjv_default(6c388f19-9663-487c-81e6-735baba406d4)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-x8fjv_default(6c388f19-9663-487c-81e6-735baba406d4)"
	Aug 21 11:18:15 ingress-addon-legacy-354854 kubelet[1624]: I0821 11:18:15.207246    1624 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 1b3b35a1bd7a2ef23c3e5755137c1d4b1cbd4ee37739f9cb9806e9bdb9f821b5
	Aug 21 11:18:15 ingress-addon-legacy-354854 kubelet[1624]: E0821 11:18:15.207506    1624 pod_workers.go:191] Error syncing pod 6c388f19-9663-487c-81e6-735baba406d4 ("hello-world-app-5f5d8b66bb-x8fjv_default(6c388f19-9663-487c-81e6-735baba406d4)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-x8fjv_default(6c388f19-9663-487c-81e6-735baba406d4)"
	Aug 21 11:18:21 ingress-addon-legacy-354854 kubelet[1624]: E0821 11:18:21.713709    1624 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Aug 21 11:18:21 ingress-addon-legacy-354854 kubelet[1624]: E0821 11:18:21.713769    1624 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Aug 21 11:18:21 ingress-addon-legacy-354854 kubelet[1624]: E0821 11:18:21.713815    1624 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Aug 21 11:18:21 ingress-addon-legacy-354854 kubelet[1624]: E0821 11:18:21.713850    1624 pod_workers.go:191] Error syncing pod ba298955-a3f1-467c-a6e7-887d7c29034c ("kube-ingress-dns-minikube_kube-system(ba298955-a3f1-467c-a6e7-887d7c29034c)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Aug 21 11:18:26 ingress-addon-legacy-354854 kubelet[1624]: I0821 11:18:26.164178    1624 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-h7fxn" (UniqueName: "kubernetes.io/secret/ba298955-a3f1-467c-a6e7-887d7c29034c-minikube-ingress-dns-token-h7fxn") pod "ba298955-a3f1-467c-a6e7-887d7c29034c" (UID: "ba298955-a3f1-467c-a6e7-887d7c29034c")
	Aug 21 11:18:26 ingress-addon-legacy-354854 kubelet[1624]: I0821 11:18:26.168845    1624 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba298955-a3f1-467c-a6e7-887d7c29034c-minikube-ingress-dns-token-h7fxn" (OuterVolumeSpecName: "minikube-ingress-dns-token-h7fxn") pod "ba298955-a3f1-467c-a6e7-887d7c29034c" (UID: "ba298955-a3f1-467c-a6e7-887d7c29034c"). InnerVolumeSpecName "minikube-ingress-dns-token-h7fxn". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 21 11:18:26 ingress-addon-legacy-354854 kubelet[1624]: I0821 11:18:26.264538    1624 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-h7fxn" (UniqueName: "kubernetes.io/secret/ba298955-a3f1-467c-a6e7-887d7c29034c-minikube-ingress-dns-token-h7fxn") on node "ingress-addon-legacy-354854" DevicePath ""
	Aug 21 11:18:27 ingress-addon-legacy-354854 kubelet[1624]: W0821 11:18:27.224442    1624 pod_container_deletor.go:77] Container "09b9e1331521c472e7892449a1857c9db019ae3b08bc9606cc70cf4914c268d3" not found in pod's containers
	Aug 21 11:18:27 ingress-addon-legacy-354854 kubelet[1624]: E0821 11:18:27.619426    1624 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-54phf.177d61ed7e62ee7e", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-54phf", UID:"259cf53f-4187-4314-a680-f1f7444c743a", APIVersion:"v1", ResourceVersion:"486", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-354854"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc130efe0e4c2907e, ext:224421304644, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc130efe0e4c2907e, ext:224421304644, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-54phf.177d61ed7e62ee7e" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Aug 21 11:18:27 ingress-addon-legacy-354854 kubelet[1624]: E0821 11:18:27.634751    1624 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-54phf.177d61ed7e62ee7e", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-54phf", UID:"259cf53f-4187-4314-a680-f1f7444c743a", APIVersion:"v1", ResourceVersion:"486", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-354854"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc130efe0e4c2907e, ext:224421304644, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc130efe0e54cef59, ext:224430372895, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-54phf.177d61ed7e62ee7e" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Aug 21 11:18:28 ingress-addon-legacy-354854 kubelet[1624]: I0821 11:18:28.712741    1624 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 1b3b35a1bd7a2ef23c3e5755137c1d4b1cbd4ee37739f9cb9806e9bdb9f821b5
	Aug 21 11:18:29 ingress-addon-legacy-354854 kubelet[1624]: I0821 11:18:29.228860    1624 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 1b3b35a1bd7a2ef23c3e5755137c1d4b1cbd4ee37739f9cb9806e9bdb9f821b5
	Aug 21 11:18:29 ingress-addon-legacy-354854 kubelet[1624]: I0821 11:18:29.229108    1624 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 5e730be28ce919d384a60c9cda8c9563375d49ddb28859e2b352f0462c362755
	Aug 21 11:18:29 ingress-addon-legacy-354854 kubelet[1624]: E0821 11:18:29.229354    1624 pod_workers.go:191] Error syncing pod 6c388f19-9663-487c-81e6-735baba406d4 ("hello-world-app-5f5d8b66bb-x8fjv_default(6c388f19-9663-487c-81e6-735baba406d4)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-x8fjv_default(6c388f19-9663-487c-81e6-735baba406d4)"
	Aug 21 11:18:30 ingress-addon-legacy-354854 kubelet[1624]: I0821 11:18:30.176523    1624 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-5mwx2" (UniqueName: "kubernetes.io/secret/259cf53f-4187-4314-a680-f1f7444c743a-ingress-nginx-token-5mwx2") pod "259cf53f-4187-4314-a680-f1f7444c743a" (UID: "259cf53f-4187-4314-a680-f1f7444c743a")
	Aug 21 11:18:30 ingress-addon-legacy-354854 kubelet[1624]: I0821 11:18:30.176600    1624 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/259cf53f-4187-4314-a680-f1f7444c743a-webhook-cert") pod "259cf53f-4187-4314-a680-f1f7444c743a" (UID: "259cf53f-4187-4314-a680-f1f7444c743a")
	Aug 21 11:18:30 ingress-addon-legacy-354854 kubelet[1624]: I0821 11:18:30.183533    1624 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/259cf53f-4187-4314-a680-f1f7444c743a-ingress-nginx-token-5mwx2" (OuterVolumeSpecName: "ingress-nginx-token-5mwx2") pod "259cf53f-4187-4314-a680-f1f7444c743a" (UID: "259cf53f-4187-4314-a680-f1f7444c743a"). InnerVolumeSpecName "ingress-nginx-token-5mwx2". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 21 11:18:30 ingress-addon-legacy-354854 kubelet[1624]: I0821 11:18:30.186640    1624 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/259cf53f-4187-4314-a680-f1f7444c743a-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "259cf53f-4187-4314-a680-f1f7444c743a" (UID: "259cf53f-4187-4314-a680-f1f7444c743a"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Aug 21 11:18:30 ingress-addon-legacy-354854 kubelet[1624]: W0821 11:18:30.231949    1624 pod_container_deletor.go:77] Container "20cce0f7039ba6cb1cadb38f1cdf50f973e5b112f0f9ea41e73249c7d1191a24" not found in pod's containers
	Aug 21 11:18:30 ingress-addon-legacy-354854 kubelet[1624]: I0821 11:18:30.276918    1624 reconciler.go:319] Volume detached for volume "ingress-nginx-token-5mwx2" (UniqueName: "kubernetes.io/secret/259cf53f-4187-4314-a680-f1f7444c743a-ingress-nginx-token-5mwx2") on node "ingress-addon-legacy-354854" DevicePath ""
	Aug 21 11:18:30 ingress-addon-legacy-354854 kubelet[1624]: I0821 11:18:30.276957    1624 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/259cf53f-4187-4314-a680-f1f7444c743a-webhook-cert") on node "ingress-addon-legacy-354854" DevicePath ""
	
	* 
	* ==> storage-provisioner [c56d23bd862aa064eda90787d39e3a86834c23538a330ddbcad097bba8c935f6] <==
	* I0821 11:15:12.716548       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0821 11:15:12.728575       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0821 11:15:12.728669       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0821 11:15:12.735393       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0821 11:15:12.735649       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-354854_8346dbb3-926f-4477-be31-f7d906027237!
	I0821 11:15:12.736561       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"467d86fa-23e0-4b03-b24b-0cd89948c197", APIVersion:"v1", ResourceVersion:"435", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-354854_8346dbb3-926f-4477-be31-f7d906027237 became leader
	I0821 11:15:12.835880       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-354854_8346dbb3-926f-4477-be31-f7d906027237!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-354854 -n ingress-addon-legacy-354854
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-354854 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (184.79s)

TestMultiNode/serial/PingHostFrom2Pods (4.55s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-994910 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-994910 -- exec busybox-67b7f59bb-46dlp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-994910 -- exec busybox-67b7f59bb-46dlp -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-arm64 kubectl -p multinode-994910 -- exec busybox-67b7f59bb-46dlp -- sh -c "ping -c 1 192.168.58.1": exit status 1 (227.540359ms)

-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-67b7f59bb-46dlp): exit status 1
multinode_test.go:560: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-994910 -- exec busybox-67b7f59bb-zhpmt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-994910 -- exec busybox-67b7f59bb-zhpmt -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-arm64 kubectl -p multinode-994910 -- exec busybox-67b7f59bb-zhpmt -- sh -c "ping -c 1 192.168.58.1": exit status 1 (237.423814ms)

-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-67b7f59bb-zhpmt): exit status 1
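The `ping: permission denied (are you root?)` failures above are characteristic of BusyBox `ping`, which opens a raw ICMP socket and therefore needs root or CAP_NET_RAW; the only alternative is an unprivileged ICMP datagram socket, which the kernel permits solely for group IDs inside `net.ipv4.ping_group_range`. A minimal check one could run on the node or inside the pod (an editorial diagnostic sketch, not part of the recorded test run; the sysctl path is standard Linux, but the pod image having a shell is an assumption):

```shell
# BusyBox `ping` tries to open a raw ICMP socket (needs root or CAP_NET_RAW).
# The kernel also allows unprivileged ICMP *datagram* sockets, but only for
# group IDs inside this range; the common default "1 0" is an empty range,
# so a non-root pod process is refused exactly as logged above.
cat /proc/sys/net/ipv4/ping_group_range
```

If the range reads `1 0`, the usual remedies are adding `CAP_NET_RAW` to the container's securityContext or widening the range on the node; the busybox test pod here appears to run with neither.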
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-994910
helpers_test.go:235: (dbg) docker inspect multinode-994910:

-- stdout --
	[
	    {
	        "Id": "044a79616bc979dbd0194b96cc19bbb9942147959a549722da18d30526d96040",
	        "Created": "2023-08-21T11:25:01.671943173Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2805254,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-08-21T11:25:01.998280052Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:f52519afe5f6d6f3ce84cbd7f651b1292638d32ca98ee43d88f2d69e113e44de",
	        "ResolvConfPath": "/var/lib/docker/containers/044a79616bc979dbd0194b96cc19bbb9942147959a549722da18d30526d96040/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/044a79616bc979dbd0194b96cc19bbb9942147959a549722da18d30526d96040/hostname",
	        "HostsPath": "/var/lib/docker/containers/044a79616bc979dbd0194b96cc19bbb9942147959a549722da18d30526d96040/hosts",
	        "LogPath": "/var/lib/docker/containers/044a79616bc979dbd0194b96cc19bbb9942147959a549722da18d30526d96040/044a79616bc979dbd0194b96cc19bbb9942147959a549722da18d30526d96040-json.log",
	        "Name": "/multinode-994910",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-994910:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "multinode-994910",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/99e1e3c8fa6bfae744673d274309e0272f35ac08eb7c4e9905f96f26bde406e6-init/diff:/var/lib/docker/overlay2/26861af3348249541ea382b8036362f60ea7ec122121fce2bcb8576e1879b2cb/diff",
	                "MergedDir": "/var/lib/docker/overlay2/99e1e3c8fa6bfae744673d274309e0272f35ac08eb7c4e9905f96f26bde406e6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/99e1e3c8fa6bfae744673d274309e0272f35ac08eb7c4e9905f96f26bde406e6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/99e1e3c8fa6bfae744673d274309e0272f35ac08eb7c4e9905f96f26bde406e6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-994910",
	                "Source": "/var/lib/docker/volumes/multinode-994910/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-994910",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-994910",
	                "name.minikube.sigs.k8s.io": "multinode-994910",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5338c9db23859241f7bd6bb8395f14216f43fcf1ec5015f7f83657f01e882eaf",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36263"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36262"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36259"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36261"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36260"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/5338c9db2385",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-994910": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "044a79616bc9",
	                        "multinode-994910"
	                    ],
	                    "NetworkID": "27268fd9dec2f8077428091c63045c368a354ccd383ddce8ac909daec50e4c45",
	                    "EndpointID": "1d77c5cc9def56d13816845fe7ecf1c8843c6b80b3928140b6566862b3b1b81a",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p multinode-994910 -n multinode-994910
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994910 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p multinode-994910 logs -n 25: (1.766764791s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -p mount-start-2-193723                           | mount-start-2-193723 | jenkins | v1.31.2 | 21 Aug 23 11:24 UTC | 21 Aug 23 11:24 UTC |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| ssh     | mount-start-2-193723 ssh -- ls                    | mount-start-2-193723 | jenkins | v1.31.2 | 21 Aug 23 11:24 UTC | 21 Aug 23 11:24 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-191844                           | mount-start-1-191844 | jenkins | v1.31.2 | 21 Aug 23 11:24 UTC | 21 Aug 23 11:24 UTC |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-193723 ssh -- ls                    | mount-start-2-193723 | jenkins | v1.31.2 | 21 Aug 23 11:24 UTC | 21 Aug 23 11:24 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-193723                           | mount-start-2-193723 | jenkins | v1.31.2 | 21 Aug 23 11:24 UTC | 21 Aug 23 11:24 UTC |
	| start   | -p mount-start-2-193723                           | mount-start-2-193723 | jenkins | v1.31.2 | 21 Aug 23 11:24 UTC | 21 Aug 23 11:24 UTC |
	| ssh     | mount-start-2-193723 ssh -- ls                    | mount-start-2-193723 | jenkins | v1.31.2 | 21 Aug 23 11:24 UTC | 21 Aug 23 11:24 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-193723                           | mount-start-2-193723 | jenkins | v1.31.2 | 21 Aug 23 11:24 UTC | 21 Aug 23 11:24 UTC |
	| delete  | -p mount-start-1-191844                           | mount-start-1-191844 | jenkins | v1.31.2 | 21 Aug 23 11:24 UTC | 21 Aug 23 11:24 UTC |
	| start   | -p multinode-994910                               | multinode-994910     | jenkins | v1.31.2 | 21 Aug 23 11:24 UTC | 21 Aug 23 11:27 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-994910 -- apply -f                   | multinode-994910     | jenkins | v1.31.2 | 21 Aug 23 11:27 UTC | 21 Aug 23 11:27 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-994910 -- rollout                    | multinode-994910     | jenkins | v1.31.2 | 21 Aug 23 11:27 UTC | 21 Aug 23 11:27 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-994910 -- get pods -o                | multinode-994910     | jenkins | v1.31.2 | 21 Aug 23 11:27 UTC | 21 Aug 23 11:27 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-994910 -- get pods -o                | multinode-994910     | jenkins | v1.31.2 | 21 Aug 23 11:27 UTC | 21 Aug 23 11:27 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-994910 -- exec                       | multinode-994910     | jenkins | v1.31.2 | 21 Aug 23 11:27 UTC | 21 Aug 23 11:27 UTC |
	|         | busybox-67b7f59bb-46dlp --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-994910 -- exec                       | multinode-994910     | jenkins | v1.31.2 | 21 Aug 23 11:27 UTC | 21 Aug 23 11:27 UTC |
	|         | busybox-67b7f59bb-zhpmt --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-994910 -- exec                       | multinode-994910     | jenkins | v1.31.2 | 21 Aug 23 11:27 UTC | 21 Aug 23 11:27 UTC |
	|         | busybox-67b7f59bb-46dlp --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-994910 -- exec                       | multinode-994910     | jenkins | v1.31.2 | 21 Aug 23 11:27 UTC | 21 Aug 23 11:27 UTC |
	|         | busybox-67b7f59bb-zhpmt --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-994910 -- exec                       | multinode-994910     | jenkins | v1.31.2 | 21 Aug 23 11:27 UTC | 21 Aug 23 11:27 UTC |
	|         | busybox-67b7f59bb-46dlp -- nslookup               |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-994910 -- exec                       | multinode-994910     | jenkins | v1.31.2 | 21 Aug 23 11:27 UTC | 21 Aug 23 11:27 UTC |
	|         | busybox-67b7f59bb-zhpmt -- nslookup               |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-994910 -- get pods -o                | multinode-994910     | jenkins | v1.31.2 | 21 Aug 23 11:27 UTC | 21 Aug 23 11:27 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-994910 -- exec                       | multinode-994910     | jenkins | v1.31.2 | 21 Aug 23 11:27 UTC | 21 Aug 23 11:27 UTC |
	|         | busybox-67b7f59bb-46dlp                           |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-994910 -- exec                       | multinode-994910     | jenkins | v1.31.2 | 21 Aug 23 11:27 UTC |                     |
	|         | busybox-67b7f59bb-46dlp -- sh                     |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-994910 -- exec                       | multinode-994910     | jenkins | v1.31.2 | 21 Aug 23 11:27 UTC | 21 Aug 23 11:27 UTC |
	|         | busybox-67b7f59bb-zhpmt                           |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-994910 -- exec                       | multinode-994910     | jenkins | v1.31.2 | 21 Aug 23 11:27 UTC |                     |
	|         | busybox-67b7f59bb-zhpmt -- sh                     |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/21 11:24:56
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.20.7 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0821 11:24:56.459310 2804799 out.go:296] Setting OutFile to fd 1 ...
	I0821 11:24:56.459448 2804799 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 11:24:56.459455 2804799 out.go:309] Setting ErrFile to fd 2...
	I0821 11:24:56.459461 2804799 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 11:24:56.459690 2804799 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17102-2734539/.minikube/bin
	I0821 11:24:56.460069 2804799 out.go:303] Setting JSON to false
	I0821 11:24:56.461178 2804799 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":72440,"bootTime":1692544656,"procs":317,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0821 11:24:56.461252 2804799 start.go:138] virtualization:  
	I0821 11:24:56.464709 2804799 out.go:177] * [multinode-994910] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0821 11:24:56.468364 2804799 out.go:177]   - MINIKUBE_LOCATION=17102
	I0821 11:24:56.471010 2804799 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0821 11:24:56.468581 2804799 notify.go:220] Checking for updates...
	I0821 11:24:56.477786 2804799 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17102-2734539/kubeconfig
	I0821 11:24:56.480223 2804799 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17102-2734539/.minikube
	I0821 11:24:56.483182 2804799 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0821 11:24:56.485308 2804799 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0821 11:24:56.487763 2804799 driver.go:373] Setting default libvirt URI to qemu:///system
	I0821 11:24:56.511567 2804799 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0821 11:24:56.511700 2804799 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0821 11:24:56.595428 2804799 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:35 SystemTime:2023-08-21 11:24:56.585636208 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0821 11:24:56.595572 2804799 docker.go:294] overlay module found
	I0821 11:24:56.600014 2804799 out.go:177] * Using the docker driver based on user configuration
	I0821 11:24:56.602596 2804799 start.go:298] selected driver: docker
	I0821 11:24:56.602618 2804799 start.go:902] validating driver "docker" against <nil>
	I0821 11:24:56.602640 2804799 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0821 11:24:56.603254 2804799 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0821 11:24:56.673436 2804799 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:35 SystemTime:2023-08-21 11:24:56.664188453 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0821 11:24:56.673631 2804799 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0821 11:24:56.673854 2804799 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0821 11:24:56.676370 2804799 out.go:177] * Using Docker driver with root privileges
	I0821 11:24:56.678906 2804799 cni.go:84] Creating CNI manager for ""
	I0821 11:24:56.678925 2804799 cni.go:136] 0 nodes found, recommending kindnet
	I0821 11:24:56.678934 2804799 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0821 11:24:56.678953 2804799 start_flags.go:319] config:
	{Name:multinode-994910 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:multinode-994910 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlu
gin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 11:24:56.683296 2804799 out.go:177] * Starting control plane node multinode-994910 in cluster multinode-994910
	I0821 11:24:56.686352 2804799 cache.go:122] Beginning downloading kic base image for docker with crio
	I0821 11:24:56.689173 2804799 out.go:177] * Pulling base image ...
	I0821 11:24:56.691230 2804799 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0821 11:24:56.691277 2804799 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-arm64.tar.lz4
	I0821 11:24:56.691289 2804799 cache.go:57] Caching tarball of preloaded images
	I0821 11:24:56.691294 2804799 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0821 11:24:56.691382 2804799 preload.go:174] Found /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0821 11:24:56.691392 2804799 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on crio
	I0821 11:24:56.691796 2804799 profile.go:148] Saving config to /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/multinode-994910/config.json ...
	I0821 11:24:56.691828 2804799 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/multinode-994910/config.json: {Name:mk2a8f7402bc05b0d80e834d7d1f90a4fc87abcd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 11:24:56.707671 2804799 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon, skipping pull
	I0821 11:24:56.707696 2804799 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in daemon, skipping load
	I0821 11:24:56.707721 2804799 cache.go:195] Successfully downloaded all kic artifacts
	I0821 11:24:56.707774 2804799 start.go:365] acquiring machines lock for multinode-994910: {Name:mkd9df436be09a907e906ad059b321fec6ebcdc0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 11:24:56.707895 2804799 start.go:369] acquired machines lock for "multinode-994910" in 102.628µs
	I0821 11:24:56.707920 2804799 start.go:93] Provisioning new machine with config: &{Name:multinode-994910 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:multinode-994910 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Cust
omQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0821 11:24:56.708023 2804799 start.go:125] createHost starting for "" (driver="docker")
	I0821 11:24:56.711086 2804799 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0821 11:24:56.711333 2804799 start.go:159] libmachine.API.Create for "multinode-994910" (driver="docker")
	I0821 11:24:56.711361 2804799 client.go:168] LocalClient.Create starting
	I0821 11:24:56.711436 2804799 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17102-2734539/.minikube/certs/ca.pem
	I0821 11:24:56.711470 2804799 main.go:141] libmachine: Decoding PEM data...
	I0821 11:24:56.711484 2804799 main.go:141] libmachine: Parsing certificate...
	I0821 11:24:56.711544 2804799 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17102-2734539/.minikube/certs/cert.pem
	I0821 11:24:56.711560 2804799 main.go:141] libmachine: Decoding PEM data...
	I0821 11:24:56.711571 2804799 main.go:141] libmachine: Parsing certificate...
	I0821 11:24:56.711938 2804799 cli_runner.go:164] Run: docker network inspect multinode-994910 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0821 11:24:56.728952 2804799 cli_runner.go:211] docker network inspect multinode-994910 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0821 11:24:56.729034 2804799 network_create.go:281] running [docker network inspect multinode-994910] to gather additional debugging logs...
	I0821 11:24:56.729054 2804799 cli_runner.go:164] Run: docker network inspect multinode-994910
	W0821 11:24:56.746339 2804799 cli_runner.go:211] docker network inspect multinode-994910 returned with exit code 1
	I0821 11:24:56.746372 2804799 network_create.go:284] error running [docker network inspect multinode-994910]: docker network inspect multinode-994910: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-994910 not found
	I0821 11:24:56.746385 2804799 network_create.go:286] output of [docker network inspect multinode-994910]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-994910 not found
	
	** /stderr **
	I0821 11:24:56.746453 2804799 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0821 11:24:56.767072 2804799 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b94741280122 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:51:cd:3e:84} reservation:<nil>}
	I0821 11:24:56.767498 2804799 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40010fdcf0}
	I0821 11:24:56.767521 2804799 network_create.go:123] attempt to create docker network multinode-994910 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0821 11:24:56.767599 2804799 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-994910 multinode-994910
	I0821 11:24:56.842682 2804799 network_create.go:107] docker network multinode-994910 192.168.58.0/24 created
	I0821 11:24:56.842714 2804799 kic.go:117] calculated static IP "192.168.58.2" for the "multinode-994910" container
	I0821 11:24:56.842799 2804799 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0821 11:24:56.858824 2804799 cli_runner.go:164] Run: docker volume create multinode-994910 --label name.minikube.sigs.k8s.io=multinode-994910 --label created_by.minikube.sigs.k8s.io=true
	I0821 11:24:56.876989 2804799 oci.go:103] Successfully created a docker volume multinode-994910
	I0821 11:24:56.877081 2804799 cli_runner.go:164] Run: docker run --rm --name multinode-994910-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-994910 --entrypoint /usr/bin/test -v multinode-994910:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib
	I0821 11:24:57.447100 2804799 oci.go:107] Successfully prepared a docker volume multinode-994910
	I0821 11:24:57.447145 2804799 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0821 11:24:57.447168 2804799 kic.go:190] Starting extracting preloaded images to volume ...
	I0821 11:24:57.447264 2804799 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-994910:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir
	I0821 11:25:01.574599 2804799 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-994910:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir: (4.127282213s)
	I0821 11:25:01.574632 2804799 kic.go:199] duration metric: took 4.127464 seconds to extract preloaded images to volume
	W0821 11:25:01.574785 2804799 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0821 11:25:01.574899 2804799 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0821 11:25:01.656259 2804799 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-994910 --name multinode-994910 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-994910 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-994910 --network multinode-994910 --ip 192.168.58.2 --volume multinode-994910:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631
	I0821 11:25:02.010324 2804799 cli_runner.go:164] Run: docker container inspect multinode-994910 --format={{.State.Running}}
	I0821 11:25:02.037218 2804799 cli_runner.go:164] Run: docker container inspect multinode-994910 --format={{.State.Status}}
	I0821 11:25:02.061338 2804799 cli_runner.go:164] Run: docker exec multinode-994910 stat /var/lib/dpkg/alternatives/iptables
	I0821 11:25:02.167929 2804799 oci.go:144] the created container "multinode-994910" has a running status.
	I0821 11:25:02.167958 2804799 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17102-2734539/.minikube/machines/multinode-994910/id_rsa...
	I0821 11:25:02.607862 2804799 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-2734539/.minikube/machines/multinode-994910/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0821 11:25:02.607912 2804799 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17102-2734539/.minikube/machines/multinode-994910/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0821 11:25:02.638982 2804799 cli_runner.go:164] Run: docker container inspect multinode-994910 --format={{.State.Status}}
	I0821 11:25:02.661971 2804799 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0821 11:25:02.661993 2804799 kic_runner.go:114] Args: [docker exec --privileged multinode-994910 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0821 11:25:02.749319 2804799 cli_runner.go:164] Run: docker container inspect multinode-994910 --format={{.State.Status}}
	I0821 11:25:02.776404 2804799 machine.go:88] provisioning docker machine ...
	I0821 11:25:02.776435 2804799 ubuntu.go:169] provisioning hostname "multinode-994910"
	I0821 11:25:02.776514 2804799 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-994910
	I0821 11:25:02.806708 2804799 main.go:141] libmachine: Using SSH client type: native
	I0821 11:25:02.807176 2804799 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0570] 0x3a2f00 <nil>  [] 0s} 127.0.0.1 36263 <nil> <nil>}
	I0821 11:25:02.807202 2804799 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-994910 && echo "multinode-994910" | sudo tee /etc/hostname
	I0821 11:25:03.028654 2804799 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-994910
	
	I0821 11:25:03.028769 2804799 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-994910
	I0821 11:25:03.055381 2804799 main.go:141] libmachine: Using SSH client type: native
	I0821 11:25:03.055824 2804799 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0570] 0x3a2f00 <nil>  [] 0s} 127.0.0.1 36263 <nil> <nil>}
	I0821 11:25:03.055850 2804799 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-994910' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-994910/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-994910' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0821 11:25:03.194910 2804799 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0821 11:25:03.194984 2804799 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17102-2734539/.minikube CaCertPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17102-2734539/.minikube}
	I0821 11:25:03.195020 2804799 ubuntu.go:177] setting up certificates
	I0821 11:25:03.195054 2804799 provision.go:83] configureAuth start
	I0821 11:25:03.195157 2804799 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-994910
	I0821 11:25:03.220577 2804799 provision.go:138] copyHostCerts
	I0821 11:25:03.220614 2804799 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-2734539/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17102-2734539/.minikube/ca.pem
	I0821 11:25:03.220645 2804799 exec_runner.go:144] found /home/jenkins/minikube-integration/17102-2734539/.minikube/ca.pem, removing ...
	I0821 11:25:03.220652 2804799 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17102-2734539/.minikube/ca.pem
	I0821 11:25:03.220721 2804799 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17102-2734539/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17102-2734539/.minikube/ca.pem (1078 bytes)
	I0821 11:25:03.220794 2804799 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-2734539/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17102-2734539/.minikube/cert.pem
	I0821 11:25:03.220810 2804799 exec_runner.go:144] found /home/jenkins/minikube-integration/17102-2734539/.minikube/cert.pem, removing ...
	I0821 11:25:03.220814 2804799 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17102-2734539/.minikube/cert.pem
	I0821 11:25:03.220844 2804799 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17102-2734539/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17102-2734539/.minikube/cert.pem (1123 bytes)
	I0821 11:25:03.220890 2804799 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-2734539/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17102-2734539/.minikube/key.pem
	I0821 11:25:03.220904 2804799 exec_runner.go:144] found /home/jenkins/minikube-integration/17102-2734539/.minikube/key.pem, removing ...
	I0821 11:25:03.220908 2804799 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17102-2734539/.minikube/key.pem
	I0821 11:25:03.220930 2804799 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17102-2734539/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17102-2734539/.minikube/key.pem (1675 bytes)
	I0821 11:25:03.220979 2804799 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17102-2734539/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17102-2734539/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17102-2734539/.minikube/certs/ca-key.pem org=jenkins.multinode-994910 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-994910]
	I0821 11:25:03.790875 2804799 provision.go:172] copyRemoteCerts
	I0821 11:25:03.790945 2804799 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0821 11:25:03.790986 2804799 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-994910
	I0821 11:25:03.813227 2804799 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36263 SSHKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/machines/multinode-994910/id_rsa Username:docker}
	I0821 11:25:03.908508 2804799 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-2734539/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0821 11:25:03.908564 2804799 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-2734539/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0821 11:25:03.937329 2804799 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-2734539/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0821 11:25:03.937401 2804799 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-2734539/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0821 11:25:03.965153 2804799 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-2734539/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0821 11:25:03.965213 2804799 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-2734539/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0821 11:25:03.993673 2804799 provision.go:86] duration metric: configureAuth took 798.58642ms
	I0821 11:25:03.993697 2804799 ubuntu.go:193] setting minikube options for container-runtime
	I0821 11:25:03.993917 2804799 config.go:182] Loaded profile config "multinode-994910": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0821 11:25:03.994026 2804799 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-994910
	I0821 11:25:04.012565 2804799 main.go:141] libmachine: Using SSH client type: native
	I0821 11:25:04.013040 2804799 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0570] 0x3a2f00 <nil>  [] 0s} 127.0.0.1 36263 <nil> <nil>}
	I0821 11:25:04.013064 2804799 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0821 11:25:04.259344 2804799 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0821 11:25:04.259367 2804799 machine.go:91] provisioned docker machine in 1.482942059s
	I0821 11:25:04.259376 2804799 client.go:171] LocalClient.Create took 7.548006538s
	I0821 11:25:04.259388 2804799 start.go:167] duration metric: libmachine.API.Create for "multinode-994910" took 7.54805644s
	I0821 11:25:04.259395 2804799 start.go:300] post-start starting for "multinode-994910" (driver="docker")
	I0821 11:25:04.259404 2804799 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0821 11:25:04.259478 2804799 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0821 11:25:04.259528 2804799 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-994910
	I0821 11:25:04.279537 2804799 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36263 SSHKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/machines/multinode-994910/id_rsa Username:docker}
	I0821 11:25:04.376764 2804799 ssh_runner.go:195] Run: cat /etc/os-release
	I0821 11:25:04.380579 2804799 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.2 LTS"
	I0821 11:25:04.380597 2804799 command_runner.go:130] > NAME="Ubuntu"
	I0821 11:25:04.380605 2804799 command_runner.go:130] > VERSION_ID="22.04"
	I0821 11:25:04.380612 2804799 command_runner.go:130] > VERSION="22.04.2 LTS (Jammy Jellyfish)"
	I0821 11:25:04.380617 2804799 command_runner.go:130] > VERSION_CODENAME=jammy
	I0821 11:25:04.380622 2804799 command_runner.go:130] > ID=ubuntu
	I0821 11:25:04.380626 2804799 command_runner.go:130] > ID_LIKE=debian
	I0821 11:25:04.380631 2804799 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0821 11:25:04.380637 2804799 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0821 11:25:04.380644 2804799 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0821 11:25:04.380651 2804799 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0821 11:25:04.380656 2804799 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0821 11:25:04.380696 2804799 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0821 11:25:04.380719 2804799 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0821 11:25:04.380729 2804799 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0821 11:25:04.380735 2804799 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0821 11:25:04.380745 2804799 filesync.go:126] Scanning /home/jenkins/minikube-integration/17102-2734539/.minikube/addons for local assets ...
	I0821 11:25:04.380801 2804799 filesync.go:126] Scanning /home/jenkins/minikube-integration/17102-2734539/.minikube/files for local assets ...
	I0821 11:25:04.380878 2804799 filesync.go:149] local asset: /home/jenkins/minikube-integration/17102-2734539/.minikube/files/etc/ssl/certs/27399302.pem -> 27399302.pem in /etc/ssl/certs
	I0821 11:25:04.380884 2804799 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-2734539/.minikube/files/etc/ssl/certs/27399302.pem -> /etc/ssl/certs/27399302.pem
	I0821 11:25:04.380983 2804799 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0821 11:25:04.391099 2804799 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-2734539/.minikube/files/etc/ssl/certs/27399302.pem --> /etc/ssl/certs/27399302.pem (1708 bytes)
	I0821 11:25:04.418871 2804799 start.go:303] post-start completed in 159.460883ms
	I0821 11:25:04.419238 2804799 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-994910
	I0821 11:25:04.436199 2804799 profile.go:148] Saving config to /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/multinode-994910/config.json ...
	I0821 11:25:04.436473 2804799 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0821 11:25:04.436521 2804799 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-994910
	I0821 11:25:04.456345 2804799 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36263 SSHKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/machines/multinode-994910/id_rsa Username:docker}
	I0821 11:25:04.547800 2804799 command_runner.go:130] > 17%!
	(MISSING)I0821 11:25:04.547892 2804799 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0821 11:25:04.553410 2804799 command_runner.go:130] > 162G
	I0821 11:25:04.553454 2804799 start.go:128] duration metric: createHost completed in 7.845422514s
	I0821 11:25:04.553466 2804799 start.go:83] releasing machines lock for "multinode-994910", held for 7.84556368s
	I0821 11:25:04.553557 2804799 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-994910
	I0821 11:25:04.570552 2804799 ssh_runner.go:195] Run: cat /version.json
	I0821 11:25:04.570570 2804799 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0821 11:25:04.570607 2804799 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-994910
	I0821 11:25:04.570612 2804799 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-994910
	I0821 11:25:04.592232 2804799 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36263 SSHKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/machines/multinode-994910/id_rsa Username:docker}
	I0821 11:25:04.601983 2804799 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36263 SSHKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/machines/multinode-994910/id_rsa Username:docker}
	I0821 11:25:04.809991 2804799 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0821 11:25:04.810070 2804799 command_runner.go:130] > {"iso_version": "v1.30.1-1689243309-16875", "kicbase_version": "v0.0.40", "minikube_version": "v1.31.0", "commit": "085433cd1b734742870dea5be8f9ee2ce4c54148"}
	I0821 11:25:04.810216 2804799 ssh_runner.go:195] Run: systemctl --version
	I0821 11:25:04.815405 2804799 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.9)
	I0821 11:25:04.815441 2804799 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I0821 11:25:04.815801 2804799 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0821 11:25:04.960453 2804799 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0821 11:25:04.965525 2804799 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0821 11:25:04.965550 2804799 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0821 11:25:04.965558 2804799 command_runner.go:130] > Device: 36h/54d	Inode: 5709935     Links: 1
	I0821 11:25:04.965566 2804799 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0821 11:25:04.965581 2804799 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0821 11:25:04.965601 2804799 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0821 11:25:04.965608 2804799 command_runner.go:130] > Change: 2023-08-21 11:02:38.566259907 +0000
	I0821 11:25:04.965617 2804799 command_runner.go:130] >  Birth: 2023-08-21 11:02:38.566259907 +0000
	I0821 11:25:04.965861 2804799 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0821 11:25:04.990857 2804799 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0821 11:25:04.990959 2804799 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0821 11:25:05.031647 2804799 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0821 11:25:05.031683 2804799 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0821 11:25:05.031691 2804799 start.go:466] detecting cgroup driver to use...
	I0821 11:25:05.031734 2804799 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0821 11:25:05.031790 2804799 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0821 11:25:05.051096 2804799 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0821 11:25:05.065088 2804799 docker.go:196] disabling cri-docker service (if available) ...
	I0821 11:25:05.065193 2804799 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0821 11:25:05.081738 2804799 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0821 11:25:05.100231 2804799 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0821 11:25:05.205954 2804799 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0821 11:25:05.222706 2804799 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0821 11:25:05.311199 2804799 docker.go:212] disabling docker service ...
	I0821 11:25:05.311271 2804799 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0821 11:25:05.335126 2804799 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0821 11:25:05.349419 2804799 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0821 11:25:05.454926 2804799 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0821 11:25:05.455004 2804799 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0821 11:25:05.559759 2804799 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0821 11:25:05.559840 2804799 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0821 11:25:05.572681 2804799 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0821 11:25:05.590905 2804799 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0821 11:25:05.592243 2804799 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0821 11:25:05.592309 2804799 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0821 11:25:05.604544 2804799 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0821 11:25:05.604622 2804799 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0821 11:25:05.617182 2804799 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0821 11:25:05.629396 2804799 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
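	Taken together, the three `sed` edits above (set `pause_image`, set `cgroup_manager`, then delete and re-append `conmon_cgroup`) leave the cri-o drop-in roughly in this state. This is a sketch assuming an otherwise default `02-crio.conf`; the surrounding keys vary by kicbase image:

```toml
# /etc/crio/crio.conf.d/02-crio.conf (sketch of the result of the sed edits)
[crio.image]
# set by the pause_image substitution
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
# set by the cgroup_manager substitution; matches the "cgroupfs" driver detected on the host
cgroup_manager = "cgroupfs"
# re-inserted on the line after cgroup_manager by the delete + append pair
conmon_cgroup = "pod"
```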
	I0821 11:25:05.641613 2804799 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0821 11:25:05.652729 2804799 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0821 11:25:05.661851 2804799 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0821 11:25:05.663030 2804799 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0821 11:25:05.673382 2804799 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0821 11:25:05.778186 2804799 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0821 11:25:05.901172 2804799 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0821 11:25:05.901254 2804799 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0821 11:25:05.905803 2804799 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0821 11:25:05.905863 2804799 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0821 11:25:05.905916 2804799 command_runner.go:130] > Device: 43h/67d	Inode: 186         Links: 1
	I0821 11:25:05.905929 2804799 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0821 11:25:05.905936 2804799 command_runner.go:130] > Access: 2023-08-21 11:25:05.883454229 +0000
	I0821 11:25:05.905943 2804799 command_runner.go:130] > Modify: 2023-08-21 11:25:05.883454229 +0000
	I0821 11:25:05.905949 2804799 command_runner.go:130] > Change: 2023-08-21 11:25:05.883454229 +0000
	I0821 11:25:05.905957 2804799 command_runner.go:130] >  Birth: -
	I0821 11:25:05.905986 2804799 start.go:534] Will wait 60s for crictl version
	I0821 11:25:05.906041 2804799 ssh_runner.go:195] Run: which crictl
	I0821 11:25:05.910289 2804799 command_runner.go:130] > /usr/bin/crictl
	I0821 11:25:05.910356 2804799 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0821 11:25:05.948674 2804799 command_runner.go:130] > Version:  0.1.0
	I0821 11:25:05.948694 2804799 command_runner.go:130] > RuntimeName:  cri-o
	I0821 11:25:05.948700 2804799 command_runner.go:130] > RuntimeVersion:  1.24.6
	I0821 11:25:05.948706 2804799 command_runner.go:130] > RuntimeApiVersion:  v1
	I0821 11:25:05.951253 2804799 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0821 11:25:05.951345 2804799 ssh_runner.go:195] Run: crio --version
	I0821 11:25:05.991443 2804799 command_runner.go:130] > crio version 1.24.6
	I0821 11:25:05.991463 2804799 command_runner.go:130] > Version:          1.24.6
	I0821 11:25:05.991473 2804799 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0821 11:25:05.991478 2804799 command_runner.go:130] > GitTreeState:     clean
	I0821 11:25:05.991485 2804799 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0821 11:25:05.991490 2804799 command_runner.go:130] > GoVersion:        go1.18.2
	I0821 11:25:05.991496 2804799 command_runner.go:130] > Compiler:         gc
	I0821 11:25:05.991502 2804799 command_runner.go:130] > Platform:         linux/arm64
	I0821 11:25:05.991512 2804799 command_runner.go:130] > Linkmode:         dynamic
	I0821 11:25:05.991521 2804799 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0821 11:25:05.991530 2804799 command_runner.go:130] > SeccompEnabled:   true
	I0821 11:25:05.991535 2804799 command_runner.go:130] > AppArmorEnabled:  false
	I0821 11:25:05.994183 2804799 ssh_runner.go:195] Run: crio --version
	I0821 11:25:06.048312 2804799 command_runner.go:130] > crio version 1.24.6
	I0821 11:25:06.048336 2804799 command_runner.go:130] > Version:          1.24.6
	I0821 11:25:06.048346 2804799 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0821 11:25:06.048352 2804799 command_runner.go:130] > GitTreeState:     clean
	I0821 11:25:06.048358 2804799 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0821 11:25:06.048363 2804799 command_runner.go:130] > GoVersion:        go1.18.2
	I0821 11:25:06.048369 2804799 command_runner.go:130] > Compiler:         gc
	I0821 11:25:06.048374 2804799 command_runner.go:130] > Platform:         linux/arm64
	I0821 11:25:06.048382 2804799 command_runner.go:130] > Linkmode:         dynamic
	I0821 11:25:06.048395 2804799 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0821 11:25:06.048403 2804799 command_runner.go:130] > SeccompEnabled:   true
	I0821 11:25:06.048411 2804799 command_runner.go:130] > AppArmorEnabled:  false
	I0821 11:25:06.050582 2804799 out.go:177] * Preparing Kubernetes v1.27.4 on CRI-O 1.24.6 ...
	I0821 11:25:06.052369 2804799 cli_runner.go:164] Run: docker network inspect multinode-994910 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0821 11:25:06.072479 2804799 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0821 11:25:06.077385 2804799 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0821 11:25:06.091122 2804799 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0821 11:25:06.091196 2804799 ssh_runner.go:195] Run: sudo crictl images --output json
	I0821 11:25:06.155490 2804799 command_runner.go:130] > {
	I0821 11:25:06.155509 2804799 command_runner.go:130] >   "images": [
	I0821 11:25:06.155515 2804799 command_runner.go:130] >     {
	I0821 11:25:06.155524 2804799 command_runner.go:130] >       "id": "b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79",
	I0821 11:25:06.155529 2804799 command_runner.go:130] >       "repoTags": [
	I0821 11:25:06.155536 2804799 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230511-dc714da8"
	I0821 11:25:06.155541 2804799 command_runner.go:130] >       ],
	I0821 11:25:06.155548 2804799 command_runner.go:130] >       "repoDigests": [
	I0821 11:25:06.155559 2804799 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2c39858b71cf6c5737ff0daa8130a6574d4c6bd2a7dacaf002060c02f2bc1b4f",
	I0821 11:25:06.155568 2804799 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974"
	I0821 11:25:06.155572 2804799 command_runner.go:130] >       ],
	I0821 11:25:06.155577 2804799 command_runner.go:130] >       "size": "60881430",
	I0821 11:25:06.155584 2804799 command_runner.go:130] >       "uid": null,
	I0821 11:25:06.155588 2804799 command_runner.go:130] >       "username": "",
	I0821 11:25:06.155597 2804799 command_runner.go:130] >       "spec": null,
	I0821 11:25:06.155602 2804799 command_runner.go:130] >       "pinned": false
	I0821 11:25:06.155606 2804799 command_runner.go:130] >     },
	I0821 11:25:06.155610 2804799 command_runner.go:130] >     {
	I0821 11:25:06.155620 2804799 command_runner.go:130] >       "id": "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I0821 11:25:06.155625 2804799 command_runner.go:130] >       "repoTags": [
	I0821 11:25:06.155631 2804799 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0821 11:25:06.155635 2804799 command_runner.go:130] >       ],
	I0821 11:25:06.155640 2804799 command_runner.go:130] >       "repoDigests": [
	I0821 11:25:06.155649 2804799 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I0821 11:25:06.155659 2804799 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I0821 11:25:06.155664 2804799 command_runner.go:130] >       ],
	I0821 11:25:06.155670 2804799 command_runner.go:130] >       "size": "29037500",
	I0821 11:25:06.155675 2804799 command_runner.go:130] >       "uid": null,
	I0821 11:25:06.155680 2804799 command_runner.go:130] >       "username": "",
	I0821 11:25:06.155685 2804799 command_runner.go:130] >       "spec": null,
	I0821 11:25:06.155690 2804799 command_runner.go:130] >       "pinned": false
	I0821 11:25:06.155694 2804799 command_runner.go:130] >     },
	I0821 11:25:06.155699 2804799 command_runner.go:130] >     {
	I0821 11:25:06.155707 2804799 command_runner.go:130] >       "id": "97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108",
	I0821 11:25:06.155711 2804799 command_runner.go:130] >       "repoTags": [
	I0821 11:25:06.155718 2804799 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0821 11:25:06.155722 2804799 command_runner.go:130] >       ],
	I0821 11:25:06.155727 2804799 command_runner.go:130] >       "repoDigests": [
	I0821 11:25:06.155736 2804799 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105",
	I0821 11:25:06.155745 2804799 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"
	I0821 11:25:06.155750 2804799 command_runner.go:130] >       ],
	I0821 11:25:06.155755 2804799 command_runner.go:130] >       "size": "51393451",
	I0821 11:25:06.155760 2804799 command_runner.go:130] >       "uid": null,
	I0821 11:25:06.155765 2804799 command_runner.go:130] >       "username": "",
	I0821 11:25:06.155770 2804799 command_runner.go:130] >       "spec": null,
	I0821 11:25:06.155779 2804799 command_runner.go:130] >       "pinned": false
	I0821 11:25:06.155783 2804799 command_runner.go:130] >     },
	I0821 11:25:06.155787 2804799 command_runner.go:130] >     {
	I0821 11:25:06.155795 2804799 command_runner.go:130] >       "id": "24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737",
	I0821 11:25:06.155799 2804799 command_runner.go:130] >       "repoTags": [
	I0821 11:25:06.155806 2804799 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.7-0"
	I0821 11:25:06.155810 2804799 command_runner.go:130] >       ],
	I0821 11:25:06.155815 2804799 command_runner.go:130] >       "repoDigests": [
	I0821 11:25:06.155825 2804799 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:1c19137e8a1716ce9f66c8c767bf114d7cad975db7a9784146486aa764f6dddd",
	I0821 11:25:06.155834 2804799 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83"
	I0821 11:25:06.155846 2804799 command_runner.go:130] >       ],
	I0821 11:25:06.155851 2804799 command_runner.go:130] >       "size": "182283991",
	I0821 11:25:06.155856 2804799 command_runner.go:130] >       "uid": {
	I0821 11:25:06.155861 2804799 command_runner.go:130] >         "value": "0"
	I0821 11:25:06.155865 2804799 command_runner.go:130] >       },
	I0821 11:25:06.155870 2804799 command_runner.go:130] >       "username": "",
	I0821 11:25:06.155874 2804799 command_runner.go:130] >       "spec": null,
	I0821 11:25:06.155880 2804799 command_runner.go:130] >       "pinned": false
	I0821 11:25:06.155884 2804799 command_runner.go:130] >     },
	I0821 11:25:06.155888 2804799 command_runner.go:130] >     {
	I0821 11:25:06.155895 2804799 command_runner.go:130] >       "id": "64aece92d6bde5b472d8185fcd2d5ab1add8814923a26561821f7cab5e819388",
	I0821 11:25:06.155900 2804799 command_runner.go:130] >       "repoTags": [
	I0821 11:25:06.155906 2804799 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.27.4"
	I0821 11:25:06.155911 2804799 command_runner.go:130] >       ],
	I0821 11:25:06.155916 2804799 command_runner.go:130] >       "repoDigests": [
	I0821 11:25:06.155925 2804799 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:697cd88d94f7f2ef42144cb3072b016dcb2e9251f0e7d41a7fede557e555452d",
	I0821 11:25:06.155934 2804799 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:f65711310c4a5a305faecd8630aeee145cda14bee3a018967c02a1495170e815"
	I0821 11:25:06.155938 2804799 command_runner.go:130] >       ],
	I0821 11:25:06.155943 2804799 command_runner.go:130] >       "size": "116270032",
	I0821 11:25:06.155948 2804799 command_runner.go:130] >       "uid": {
	I0821 11:25:06.155953 2804799 command_runner.go:130] >         "value": "0"
	I0821 11:25:06.155958 2804799 command_runner.go:130] >       },
	I0821 11:25:06.155963 2804799 command_runner.go:130] >       "username": "",
	I0821 11:25:06.155968 2804799 command_runner.go:130] >       "spec": null,
	I0821 11:25:06.155973 2804799 command_runner.go:130] >       "pinned": false
	I0821 11:25:06.155977 2804799 command_runner.go:130] >     },
	I0821 11:25:06.155981 2804799 command_runner.go:130] >     {
	I0821 11:25:06.155989 2804799 command_runner.go:130] >       "id": "389f6f052cf83156f82a2bbbf6ea2c24292d246b58900d91f6a1707eacf510b2",
	I0821 11:25:06.155994 2804799 command_runner.go:130] >       "repoTags": [
	I0821 11:25:06.156001 2804799 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.27.4"
	I0821 11:25:06.156005 2804799 command_runner.go:130] >       ],
	I0821 11:25:06.156010 2804799 command_runner.go:130] >       "repoDigests": [
	I0821 11:25:06.156020 2804799 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:6286e500782ad6d0b37a1b8be57fc73f597dc931dfc73ff18ce534059803b265",
	I0821 11:25:06.156030 2804799 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:955b498eda0646d58e6d15e1156da8ac731dedf1a9a47b5fbccce0d5e29fd3fd"
	I0821 11:25:06.156034 2804799 command_runner.go:130] >       ],
	I0821 11:25:06.156042 2804799 command_runner.go:130] >       "size": "108667702",
	I0821 11:25:06.156046 2804799 command_runner.go:130] >       "uid": {
	I0821 11:25:06.156051 2804799 command_runner.go:130] >         "value": "0"
	I0821 11:25:06.156055 2804799 command_runner.go:130] >       },
	I0821 11:25:06.156060 2804799 command_runner.go:130] >       "username": "",
	I0821 11:25:06.156064 2804799 command_runner.go:130] >       "spec": null,
	I0821 11:25:06.156069 2804799 command_runner.go:130] >       "pinned": false
	I0821 11:25:06.156073 2804799 command_runner.go:130] >     },
	I0821 11:25:06.156078 2804799 command_runner.go:130] >     {
	I0821 11:25:06.156085 2804799 command_runner.go:130] >       "id": "532e5a30e948f1c084333316b13e68fbeff8df667f3830b082005127a6d86317",
	I0821 11:25:06.156090 2804799 command_runner.go:130] >       "repoTags": [
	I0821 11:25:06.156096 2804799 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.27.4"
	I0821 11:25:06.156100 2804799 command_runner.go:130] >       ],
	I0821 11:25:06.156105 2804799 command_runner.go:130] >       "repoDigests": [
	I0821 11:25:06.156114 2804799 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4bcb707da9898d2625f5d4edc6d0c96519a24f16db914fc673aa8f97e41dbabf",
	I0821 11:25:06.156123 2804799 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:f22b84e066d9bb46451754c220ae6f85bfaf4b661636af4bcc22c221f9b8ccca"
	I0821 11:25:06.156127 2804799 command_runner.go:130] >       ],
	I0821 11:25:06.156133 2804799 command_runner.go:130] >       "size": "68099991",
	I0821 11:25:06.156138 2804799 command_runner.go:130] >       "uid": null,
	I0821 11:25:06.156143 2804799 command_runner.go:130] >       "username": "",
	I0821 11:25:06.156147 2804799 command_runner.go:130] >       "spec": null,
	I0821 11:25:06.156153 2804799 command_runner.go:130] >       "pinned": false
	I0821 11:25:06.156157 2804799 command_runner.go:130] >     },
	I0821 11:25:06.156162 2804799 command_runner.go:130] >     {
	I0821 11:25:06.156169 2804799 command_runner.go:130] >       "id": "6eb63895cb67fce76da3ed6eaaa865ff55e7c761c9e6a691a83855ff0987a085",
	I0821 11:25:06.156174 2804799 command_runner.go:130] >       "repoTags": [
	I0821 11:25:06.156180 2804799 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.27.4"
	I0821 11:25:06.156184 2804799 command_runner.go:130] >       ],
	I0821 11:25:06.156189 2804799 command_runner.go:130] >       "repoDigests": [
	I0821 11:25:06.156204 2804799 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:516cd341872a8d3c967df9a69eeff664651efbb9df438f8dce6bf3ab430f26f8",
	I0821 11:25:06.156214 2804799 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:5897d7a97d23dce25cbf36fcd6e919180a8ef904bf5156583ffdb6a733ab04af"
	I0821 11:25:06.156218 2804799 command_runner.go:130] >       ],
	I0821 11:25:06.156224 2804799 command_runner.go:130] >       "size": "57615158",
	I0821 11:25:06.156228 2804799 command_runner.go:130] >       "uid": {
	I0821 11:25:06.156233 2804799 command_runner.go:130] >         "value": "0"
	I0821 11:25:06.156237 2804799 command_runner.go:130] >       },
	I0821 11:25:06.156242 2804799 command_runner.go:130] >       "username": "",
	I0821 11:25:06.156247 2804799 command_runner.go:130] >       "spec": null,
	I0821 11:25:06.156252 2804799 command_runner.go:130] >       "pinned": false
	I0821 11:25:06.156256 2804799 command_runner.go:130] >     },
	I0821 11:25:06.156260 2804799 command_runner.go:130] >     {
	I0821 11:25:06.156267 2804799 command_runner.go:130] >       "id": "829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e",
	I0821 11:25:06.156272 2804799 command_runner.go:130] >       "repoTags": [
	I0821 11:25:06.156278 2804799 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0821 11:25:06.156282 2804799 command_runner.go:130] >       ],
	I0821 11:25:06.156287 2804799 command_runner.go:130] >       "repoDigests": [
	I0821 11:25:06.156295 2804799 command_runner.go:130] >         "registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6",
	I0821 11:25:06.156306 2804799 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"
	I0821 11:25:06.156310 2804799 command_runner.go:130] >       ],
	I0821 11:25:06.156315 2804799 command_runner.go:130] >       "size": "520014",
	I0821 11:25:06.156320 2804799 command_runner.go:130] >       "uid": {
	I0821 11:25:06.156325 2804799 command_runner.go:130] >         "value": "65535"
	I0821 11:25:06.156329 2804799 command_runner.go:130] >       },
	I0821 11:25:06.156334 2804799 command_runner.go:130] >       "username": "",
	I0821 11:25:06.156339 2804799 command_runner.go:130] >       "spec": null,
	I0821 11:25:06.156344 2804799 command_runner.go:130] >       "pinned": false
	I0821 11:25:06.156348 2804799 command_runner.go:130] >     }
	I0821 11:25:06.156352 2804799 command_runner.go:130] >   ]
	I0821 11:25:06.156356 2804799 command_runner.go:130] > }
	I0821 11:25:06.156555 2804799 crio.go:496] all images are preloaded for cri-o runtime.
	I0821 11:25:06.156562 2804799 crio.go:415] Images already preloaded, skipping extraction
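	The preload check above amounts to comparing the `repoTags` reported by `crictl images --output json` against the image list expected for this Kubernetes version. A minimal sketch of that comparison (the function name and sample data are illustrative, not minikube's actual code):

```python
import json

def all_images_preloaded(crictl_json: str, expected_tags: list[str]) -> bool:
    """Return True if every expected repo tag appears in `crictl images --output json` output."""
    present = set()
    for img in json.loads(crictl_json).get("images", []):
        present.update(img.get("repoTags", []))
    return all(tag in present for tag in expected_tags)

# Trimmed-down sample shaped like the JSON in the log above
sample = json.dumps({"images": [
    {"repoTags": ["registry.k8s.io/pause:3.9"]},
    {"repoTags": ["registry.k8s.io/etcd:3.5.7-0"]},
]})

print(all_images_preloaded(sample, ["registry.k8s.io/pause:3.9"]))           # True
print(all_images_preloaded(sample, ["registry.k8s.io/kube-proxy:v1.27.4"]))  # False
```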
	I0821 11:25:06.156620 2804799 ssh_runner.go:195] Run: sudo crictl images --output json
	I0821 11:25:06.197670 2804799 command_runner.go:130] > {
	I0821 11:25:06.197736 2804799 command_runner.go:130] >   "images": [
	I0821 11:25:06.197755 2804799 command_runner.go:130] >     {
	I0821 11:25:06.197777 2804799 command_runner.go:130] >       "id": "b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79",
	I0821 11:25:06.197802 2804799 command_runner.go:130] >       "repoTags": [
	I0821 11:25:06.197831 2804799 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230511-dc714da8"
	I0821 11:25:06.197853 2804799 command_runner.go:130] >       ],
	I0821 11:25:06.197890 2804799 command_runner.go:130] >       "repoDigests": [
	I0821 11:25:06.197918 2804799 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2c39858b71cf6c5737ff0daa8130a6574d4c6bd2a7dacaf002060c02f2bc1b4f",
	I0821 11:25:06.197949 2804799 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974"
	I0821 11:25:06.197968 2804799 command_runner.go:130] >       ],
	I0821 11:25:06.197985 2804799 command_runner.go:130] >       "size": "60881430",
	I0821 11:25:06.198004 2804799 command_runner.go:130] >       "uid": null,
	I0821 11:25:06.198031 2804799 command_runner.go:130] >       "username": "",
	I0821 11:25:06.198057 2804799 command_runner.go:130] >       "spec": null,
	I0821 11:25:06.198078 2804799 command_runner.go:130] >       "pinned": false
	I0821 11:25:06.198096 2804799 command_runner.go:130] >     },
	I0821 11:25:06.198114 2804799 command_runner.go:130] >     {
	I0821 11:25:06.198145 2804799 command_runner.go:130] >       "id": "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I0821 11:25:06.198171 2804799 command_runner.go:130] >       "repoTags": [
	I0821 11:25:06.198193 2804799 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0821 11:25:06.198211 2804799 command_runner.go:130] >       ],
	I0821 11:25:06.198231 2804799 command_runner.go:130] >       "repoDigests": [
	I0821 11:25:06.198263 2804799 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I0821 11:25:06.198313 2804799 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I0821 11:25:06.198338 2804799 command_runner.go:130] >       ],
	I0821 11:25:06.198359 2804799 command_runner.go:130] >       "size": "29037500",
	I0821 11:25:06.198388 2804799 command_runner.go:130] >       "uid": null,
	I0821 11:25:06.198411 2804799 command_runner.go:130] >       "username": "",
	I0821 11:25:06.198431 2804799 command_runner.go:130] >       "spec": null,
	I0821 11:25:06.198452 2804799 command_runner.go:130] >       "pinned": false
	I0821 11:25:06.198470 2804799 command_runner.go:130] >     },
	I0821 11:25:06.198496 2804799 command_runner.go:130] >     {
	I0821 11:25:06.198524 2804799 command_runner.go:130] >       "id": "97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108",
	I0821 11:25:06.198543 2804799 command_runner.go:130] >       "repoTags": [
	I0821 11:25:06.198564 2804799 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0821 11:25:06.198581 2804799 command_runner.go:130] >       ],
	I0821 11:25:06.198608 2804799 command_runner.go:130] >       "repoDigests": [
	I0821 11:25:06.198636 2804799 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105",
	I0821 11:25:06.198660 2804799 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"
	I0821 11:25:06.198678 2804799 command_runner.go:130] >       ],
	I0821 11:25:06.198706 2804799 command_runner.go:130] >       "size": "51393451",
	I0821 11:25:06.198727 2804799 command_runner.go:130] >       "uid": null,
	I0821 11:25:06.198750 2804799 command_runner.go:130] >       "username": "",
	I0821 11:25:06.198769 2804799 command_runner.go:130] >       "spec": null,
	I0821 11:25:06.198790 2804799 command_runner.go:130] >       "pinned": false
	I0821 11:25:06.198820 2804799 command_runner.go:130] >     },
	I0821 11:25:06.198837 2804799 command_runner.go:130] >     {
	I0821 11:25:06.198859 2804799 command_runner.go:130] >       "id": "24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737",
	I0821 11:25:06.198879 2804799 command_runner.go:130] >       "repoTags": [
	I0821 11:25:06.198906 2804799 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.7-0"
	I0821 11:25:06.198929 2804799 command_runner.go:130] >       ],
	I0821 11:25:06.198951 2804799 command_runner.go:130] >       "repoDigests": [
	I0821 11:25:06.198975 2804799 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:1c19137e8a1716ce9f66c8c767bf114d7cad975db7a9784146486aa764f6dddd",
	I0821 11:25:06.199008 2804799 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83"
	I0821 11:25:06.199037 2804799 command_runner.go:130] >       ],
	I0821 11:25:06.199056 2804799 command_runner.go:130] >       "size": "182283991",
	I0821 11:25:06.199074 2804799 command_runner.go:130] >       "uid": {
	I0821 11:25:06.199105 2804799 command_runner.go:130] >         "value": "0"
	I0821 11:25:06.199125 2804799 command_runner.go:130] >       },
	I0821 11:25:06.199143 2804799 command_runner.go:130] >       "username": "",
	I0821 11:25:06.199163 2804799 command_runner.go:130] >       "spec": null,
	I0821 11:25:06.199181 2804799 command_runner.go:130] >       "pinned": false
	I0821 11:25:06.199208 2804799 command_runner.go:130] >     },
	I0821 11:25:06.199231 2804799 command_runner.go:130] >     {
	I0821 11:25:06.199253 2804799 command_runner.go:130] >       "id": "64aece92d6bde5b472d8185fcd2d5ab1add8814923a26561821f7cab5e819388",
	I0821 11:25:06.199272 2804799 command_runner.go:130] >       "repoTags": [
	I0821 11:25:06.199292 2804799 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.27.4"
	I0821 11:25:06.199318 2804799 command_runner.go:130] >       ],
	I0821 11:25:06.199341 2804799 command_runner.go:130] >       "repoDigests": [
	I0821 11:25:06.199365 2804799 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:697cd88d94f7f2ef42144cb3072b016dcb2e9251f0e7d41a7fede557e555452d",
	I0821 11:25:06.199387 2804799 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:f65711310c4a5a305faecd8630aeee145cda14bee3a018967c02a1495170e815"
	I0821 11:25:06.199416 2804799 command_runner.go:130] >       ],
	I0821 11:25:06.199437 2804799 command_runner.go:130] >       "size": "116270032",
	I0821 11:25:06.199456 2804799 command_runner.go:130] >       "uid": {
	I0821 11:25:06.199474 2804799 command_runner.go:130] >         "value": "0"
	I0821 11:25:06.199493 2804799 command_runner.go:130] >       },
	I0821 11:25:06.199521 2804799 command_runner.go:130] >       "username": "",
	I0821 11:25:06.199547 2804799 command_runner.go:130] >       "spec": null,
	I0821 11:25:06.199572 2804799 command_runner.go:130] >       "pinned": false
	I0821 11:25:06.199590 2804799 command_runner.go:130] >     },
	I0821 11:25:06.199647 2804799 command_runner.go:130] >     {
	I0821 11:25:06.199764 2804799 command_runner.go:130] >       "id": "389f6f052cf83156f82a2bbbf6ea2c24292d246b58900d91f6a1707eacf510b2",
	I0821 11:25:06.199790 2804799 command_runner.go:130] >       "repoTags": [
	I0821 11:25:06.199816 2804799 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.27.4"
	I0821 11:25:06.199835 2804799 command_runner.go:130] >       ],
	I0821 11:25:06.199866 2804799 command_runner.go:130] >       "repoDigests": [
	I0821 11:25:06.199892 2804799 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:6286e500782ad6d0b37a1b8be57fc73f597dc931dfc73ff18ce534059803b265",
	I0821 11:25:06.199916 2804799 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:955b498eda0646d58e6d15e1156da8ac731dedf1a9a47b5fbccce0d5e29fd3fd"
	I0821 11:25:06.199935 2804799 command_runner.go:130] >       ],
	I0821 11:25:06.199965 2804799 command_runner.go:130] >       "size": "108667702",
	I0821 11:25:06.199986 2804799 command_runner.go:130] >       "uid": {
	I0821 11:25:06.200005 2804799 command_runner.go:130] >         "value": "0"
	I0821 11:25:06.200022 2804799 command_runner.go:130] >       },
	I0821 11:25:06.200040 2804799 command_runner.go:130] >       "username": "",
	I0821 11:25:06.200069 2804799 command_runner.go:130] >       "spec": null,
	I0821 11:25:06.200094 2804799 command_runner.go:130] >       "pinned": false
	I0821 11:25:06.200117 2804799 command_runner.go:130] >     },
	I0821 11:25:06.200136 2804799 command_runner.go:130] >     {
	I0821 11:25:06.200168 2804799 command_runner.go:130] >       "id": "532e5a30e948f1c084333316b13e68fbeff8df667f3830b082005127a6d86317",
	I0821 11:25:06.200189 2804799 command_runner.go:130] >       "repoTags": [
	I0821 11:25:06.200209 2804799 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.27.4"
	I0821 11:25:06.200228 2804799 command_runner.go:130] >       ],
	I0821 11:25:06.200246 2804799 command_runner.go:130] >       "repoDigests": [
	I0821 11:25:06.200277 2804799 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4bcb707da9898d2625f5d4edc6d0c96519a24f16db914fc673aa8f97e41dbabf",
	I0821 11:25:06.200307 2804799 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:f22b84e066d9bb46451754c220ae6f85bfaf4b661636af4bcc22c221f9b8ccca"
	I0821 11:25:06.200326 2804799 command_runner.go:130] >       ],
	I0821 11:25:06.200344 2804799 command_runner.go:130] >       "size": "68099991",
	I0821 11:25:06.200362 2804799 command_runner.go:130] >       "uid": null,
	I0821 11:25:06.200389 2804799 command_runner.go:130] >       "username": "",
	I0821 11:25:06.200412 2804799 command_runner.go:130] >       "spec": null,
	I0821 11:25:06.200431 2804799 command_runner.go:130] >       "pinned": false
	I0821 11:25:06.200449 2804799 command_runner.go:130] >     },
	I0821 11:25:06.200467 2804799 command_runner.go:130] >     {
	I0821 11:25:06.200498 2804799 command_runner.go:130] >       "id": "6eb63895cb67fce76da3ed6eaaa865ff55e7c761c9e6a691a83855ff0987a085",
	I0821 11:25:06.200529 2804799 command_runner.go:130] >       "repoTags": [
	I0821 11:25:06.200549 2804799 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.27.4"
	I0821 11:25:06.200567 2804799 command_runner.go:130] >       ],
	I0821 11:25:06.200586 2804799 command_runner.go:130] >       "repoDigests": [
	I0821 11:25:06.200763 2804799 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:516cd341872a8d3c967df9a69eeff664651efbb9df438f8dce6bf3ab430f26f8",
	I0821 11:25:06.200799 2804799 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:5897d7a97d23dce25cbf36fcd6e919180a8ef904bf5156583ffdb6a733ab04af"
	I0821 11:25:06.200819 2804799 command_runner.go:130] >       ],
	I0821 11:25:06.200838 2804799 command_runner.go:130] >       "size": "57615158",
	I0821 11:25:06.200856 2804799 command_runner.go:130] >       "uid": {
	I0821 11:25:06.200889 2804799 command_runner.go:130] >         "value": "0"
	I0821 11:25:06.200906 2804799 command_runner.go:130] >       },
	I0821 11:25:06.200924 2804799 command_runner.go:130] >       "username": "",
	I0821 11:25:06.200946 2804799 command_runner.go:130] >       "spec": null,
	I0821 11:25:06.200974 2804799 command_runner.go:130] >       "pinned": false
	I0821 11:25:06.200998 2804799 command_runner.go:130] >     },
	I0821 11:25:06.201016 2804799 command_runner.go:130] >     {
	I0821 11:25:06.201037 2804799 command_runner.go:130] >       "id": "829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e",
	I0821 11:25:06.201056 2804799 command_runner.go:130] >       "repoTags": [
	I0821 11:25:06.201092 2804799 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0821 11:25:06.201114 2804799 command_runner.go:130] >       ],
	I0821 11:25:06.201132 2804799 command_runner.go:130] >       "repoDigests": [
	I0821 11:25:06.201155 2804799 command_runner.go:130] >         "registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6",
	I0821 11:25:06.201176 2804799 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"
	I0821 11:25:06.201207 2804799 command_runner.go:130] >       ],
	I0821 11:25:06.201225 2804799 command_runner.go:130] >       "size": "520014",
	I0821 11:25:06.201243 2804799 command_runner.go:130] >       "uid": {
	I0821 11:25:06.201262 2804799 command_runner.go:130] >         "value": "65535"
	I0821 11:25:06.201289 2804799 command_runner.go:130] >       },
	I0821 11:25:06.201314 2804799 command_runner.go:130] >       "username": "",
	I0821 11:25:06.201333 2804799 command_runner.go:130] >       "spec": null,
	I0821 11:25:06.201352 2804799 command_runner.go:130] >       "pinned": false
	I0821 11:25:06.201369 2804799 command_runner.go:130] >     }
	I0821 11:25:06.201397 2804799 command_runner.go:130] >   ]
	I0821 11:25:06.201421 2804799 command_runner.go:130] > }
	I0821 11:25:06.204142 2804799 crio.go:496] all images are preloaded for cri-o runtime.
	I0821 11:25:06.204160 2804799 cache_images.go:84] Images are preloaded, skipping loading
	I0821 11:25:06.204229 2804799 ssh_runner.go:195] Run: crio config
	I0821 11:25:06.258755 2804799 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0821 11:25:06.258783 2804799 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0821 11:25:06.258791 2804799 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0821 11:25:06.258796 2804799 command_runner.go:130] > #
	I0821 11:25:06.258809 2804799 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0821 11:25:06.258817 2804799 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0821 11:25:06.258825 2804799 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0821 11:25:06.258839 2804799 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0821 11:25:06.258853 2804799 command_runner.go:130] > # reload'.
	I0821 11:25:06.258861 2804799 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0821 11:25:06.258869 2804799 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0821 11:25:06.258880 2804799 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0821 11:25:06.258887 2804799 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0821 11:25:06.258894 2804799 command_runner.go:130] > [crio]
	I0821 11:25:06.258902 2804799 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0821 11:25:06.258908 2804799 command_runner.go:130] > # containers images, in this directory.
	I0821 11:25:06.258924 2804799 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I0821 11:25:06.258942 2804799 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0821 11:25:06.258951 2804799 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I0821 11:25:06.258959 2804799 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0821 11:25:06.258968 2804799 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0821 11:25:06.258975 2804799 command_runner.go:130] > # storage_driver = "vfs"
	I0821 11:25:06.258984 2804799 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0821 11:25:06.258999 2804799 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0821 11:25:06.259004 2804799 command_runner.go:130] > # storage_option = [
	I0821 11:25:06.259008 2804799 command_runner.go:130] > # ]
	I0821 11:25:06.259017 2804799 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0821 11:25:06.259027 2804799 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0821 11:25:06.259033 2804799 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0821 11:25:06.259039 2804799 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0821 11:25:06.259049 2804799 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0821 11:25:06.259055 2804799 command_runner.go:130] > # always happen on a node reboot
	I0821 11:25:06.259064 2804799 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0821 11:25:06.259072 2804799 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0821 11:25:06.259082 2804799 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0821 11:25:06.259093 2804799 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0821 11:25:06.259103 2804799 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0821 11:25:06.259112 2804799 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0821 11:25:06.259124 2804799 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0821 11:25:06.259129 2804799 command_runner.go:130] > # internal_wipe = true
	I0821 11:25:06.259135 2804799 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0821 11:25:06.259143 2804799 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0821 11:25:06.259154 2804799 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0821 11:25:06.259161 2804799 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0821 11:25:06.259172 2804799 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0821 11:25:06.259177 2804799 command_runner.go:130] > [crio.api]
	I0821 11:25:06.259185 2804799 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0821 11:25:06.259193 2804799 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0821 11:25:06.259199 2804799 command_runner.go:130] > # IP address on which the stream server will listen.
	I0821 11:25:06.259205 2804799 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0821 11:25:06.259217 2804799 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0821 11:25:06.259223 2804799 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0821 11:25:06.259232 2804799 command_runner.go:130] > # stream_port = "0"
	I0821 11:25:06.259239 2804799 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0821 11:25:06.259246 2804799 command_runner.go:130] > # stream_enable_tls = false
	I0821 11:25:06.259254 2804799 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0821 11:25:06.259259 2804799 command_runner.go:130] > # stream_idle_timeout = ""
	I0821 11:25:06.259268 2804799 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0821 11:25:06.259276 2804799 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0821 11:25:06.259283 2804799 command_runner.go:130] > # minutes.
	I0821 11:25:06.259288 2804799 command_runner.go:130] > # stream_tls_cert = ""
	I0821 11:25:06.259295 2804799 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0821 11:25:06.259302 2804799 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0821 11:25:06.259309 2804799 command_runner.go:130] > # stream_tls_key = ""
	I0821 11:25:06.259319 2804799 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0821 11:25:06.259333 2804799 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0821 11:25:06.259339 2804799 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0821 11:25:06.259346 2804799 command_runner.go:130] > # stream_tls_ca = ""
	I0821 11:25:06.259355 2804799 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0821 11:25:06.259363 2804799 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I0821 11:25:06.259375 2804799 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0821 11:25:06.259381 2804799 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I0821 11:25:06.259424 2804799 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0821 11:25:06.259434 2804799 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0821 11:25:06.259439 2804799 command_runner.go:130] > [crio.runtime]
	I0821 11:25:06.259446 2804799 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0821 11:25:06.259452 2804799 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0821 11:25:06.259457 2804799 command_runner.go:130] > # "nofile=1024:2048"
	I0821 11:25:06.259464 2804799 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0821 11:25:06.259469 2804799 command_runner.go:130] > # default_ulimits = [
	I0821 11:25:06.259473 2804799 command_runner.go:130] > # ]
	I0821 11:25:06.259480 2804799 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0821 11:25:06.259487 2804799 command_runner.go:130] > # no_pivot = false
	I0821 11:25:06.259496 2804799 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0821 11:25:06.259504 2804799 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0821 11:25:06.259513 2804799 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0821 11:25:06.259520 2804799 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0821 11:25:06.259526 2804799 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0821 11:25:06.259539 2804799 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0821 11:25:06.259548 2804799 command_runner.go:130] > # conmon = ""
	I0821 11:25:06.259553 2804799 command_runner.go:130] > # Cgroup setting for conmon
	I0821 11:25:06.259562 2804799 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0821 11:25:06.259569 2804799 command_runner.go:130] > conmon_cgroup = "pod"
	I0821 11:25:06.259578 2804799 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0821 11:25:06.259585 2804799 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0821 11:25:06.259595 2804799 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0821 11:25:06.259599 2804799 command_runner.go:130] > # conmon_env = [
	I0821 11:25:06.259611 2804799 command_runner.go:130] > # ]
	I0821 11:25:06.259617 2804799 command_runner.go:130] > # Additional environment variables to set for all the
	I0821 11:25:06.259630 2804799 command_runner.go:130] > # containers. These are overridden if set in the
	I0821 11:25:06.259637 2804799 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0821 11:25:06.259641 2804799 command_runner.go:130] > # default_env = [
	I0821 11:25:06.259645 2804799 command_runner.go:130] > # ]
	I0821 11:25:06.259652 2804799 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0821 11:25:06.259659 2804799 command_runner.go:130] > # selinux = false
	I0821 11:25:06.259667 2804799 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0821 11:25:06.259679 2804799 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0821 11:25:06.259686 2804799 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0821 11:25:06.259694 2804799 command_runner.go:130] > # seccomp_profile = ""
	I0821 11:25:06.259700 2804799 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0821 11:25:06.259709 2804799 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0821 11:25:06.259717 2804799 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0821 11:25:06.259725 2804799 command_runner.go:130] > # which might increase security.
	I0821 11:25:06.259731 2804799 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I0821 11:25:06.259740 2804799 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0821 11:25:06.259750 2804799 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0821 11:25:06.259758 2804799 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0821 11:25:06.259766 2804799 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0821 11:25:06.259775 2804799 command_runner.go:130] > # This option supports live configuration reload.
	I0821 11:25:06.259780 2804799 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0821 11:25:06.259790 2804799 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0821 11:25:06.259798 2804799 command_runner.go:130] > # the cgroup blockio controller.
	I0821 11:25:06.259803 2804799 command_runner.go:130] > # blockio_config_file = ""
	I0821 11:25:06.259810 2804799 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0821 11:25:06.259818 2804799 command_runner.go:130] > # irqbalance daemon.
	I0821 11:25:06.259826 2804799 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0821 11:25:06.259835 2804799 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0821 11:25:06.259844 2804799 command_runner.go:130] > # This option supports live configuration reload.
	I0821 11:25:06.259848 2804799 command_runner.go:130] > # rdt_config_file = ""
	I0821 11:25:06.259855 2804799 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0821 11:25:06.259862 2804799 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0821 11:25:06.259870 2804799 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0821 11:25:06.259877 2804799 command_runner.go:130] > # separate_pull_cgroup = ""
	I0821 11:25:06.259884 2804799 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0821 11:25:06.259892 2804799 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0821 11:25:06.259897 2804799 command_runner.go:130] > # will be added.
	I0821 11:25:06.259901 2804799 command_runner.go:130] > # default_capabilities = [
	I0821 11:25:06.261246 2804799 command_runner.go:130] > # 	"CHOWN",
	I0821 11:25:06.261262 2804799 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0821 11:25:06.261267 2804799 command_runner.go:130] > # 	"FSETID",
	I0821 11:25:06.261272 2804799 command_runner.go:130] > # 	"FOWNER",
	I0821 11:25:06.261276 2804799 command_runner.go:130] > # 	"SETGID",
	I0821 11:25:06.261282 2804799 command_runner.go:130] > # 	"SETUID",
	I0821 11:25:06.261287 2804799 command_runner.go:130] > # 	"SETPCAP",
	I0821 11:25:06.261292 2804799 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0821 11:25:06.261296 2804799 command_runner.go:130] > # 	"KILL",
	I0821 11:25:06.261300 2804799 command_runner.go:130] > # ]
	I0821 11:25:06.261309 2804799 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0821 11:25:06.261325 2804799 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0821 11:25:06.261332 2804799 command_runner.go:130] > # add_inheritable_capabilities = true
	I0821 11:25:06.261345 2804799 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0821 11:25:06.261353 2804799 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0821 11:25:06.261358 2804799 command_runner.go:130] > # default_sysctls = [
	I0821 11:25:06.261363 2804799 command_runner.go:130] > # ]
	I0821 11:25:06.261369 2804799 command_runner.go:130] > # List of devices on the host that a
	I0821 11:25:06.261379 2804799 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0821 11:25:06.261388 2804799 command_runner.go:130] > # allowed_devices = [
	I0821 11:25:06.261394 2804799 command_runner.go:130] > # 	"/dev/fuse",
	I0821 11:25:06.261398 2804799 command_runner.go:130] > # ]
	I0821 11:25:06.261404 2804799 command_runner.go:130] > # List of additional devices. specified as
	I0821 11:25:06.261422 2804799 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0821 11:25:06.261432 2804799 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0821 11:25:06.261440 2804799 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0821 11:25:06.261451 2804799 command_runner.go:130] > # additional_devices = [
	I0821 11:25:06.261455 2804799 command_runner.go:130] > # ]
	I0821 11:25:06.261464 2804799 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0821 11:25:06.261469 2804799 command_runner.go:130] > # cdi_spec_dirs = [
	I0821 11:25:06.261476 2804799 command_runner.go:130] > # 	"/etc/cdi",
	I0821 11:25:06.261481 2804799 command_runner.go:130] > # 	"/var/run/cdi",
	I0821 11:25:06.261485 2804799 command_runner.go:130] > # ]
	I0821 11:25:06.261492 2804799 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0821 11:25:06.261503 2804799 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0821 11:25:06.261508 2804799 command_runner.go:130] > # Defaults to false.
	I0821 11:25:06.261516 2804799 command_runner.go:130] > # device_ownership_from_security_context = false
	I0821 11:25:06.261524 2804799 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0821 11:25:06.261532 2804799 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0821 11:25:06.261537 2804799 command_runner.go:130] > # hooks_dir = [
	I0821 11:25:06.261542 2804799 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0821 11:25:06.261547 2804799 command_runner.go:130] > # ]
	I0821 11:25:06.261557 2804799 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0821 11:25:06.261574 2804799 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0821 11:25:06.261580 2804799 command_runner.go:130] > # its default mounts from the following two files:
	I0821 11:25:06.261593 2804799 command_runner.go:130] > #
	I0821 11:25:06.261600 2804799 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0821 11:25:06.261608 2804799 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0821 11:25:06.261620 2804799 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0821 11:25:06.261624 2804799 command_runner.go:130] > #
	I0821 11:25:06.261632 2804799 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0821 11:25:06.261649 2804799 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0821 11:25:06.261656 2804799 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0821 11:25:06.261668 2804799 command_runner.go:130] > #      only add mounts it finds in this file.
	I0821 11:25:06.261676 2804799 command_runner.go:130] > #
	I0821 11:25:06.261681 2804799 command_runner.go:130] > # default_mounts_file = ""
	I0821 11:25:06.261688 2804799 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0821 11:25:06.261696 2804799 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0821 11:25:06.261702 2804799 command_runner.go:130] > # pids_limit = 0
	I0821 11:25:06.261712 2804799 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0821 11:25:06.261722 2804799 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0821 11:25:06.261730 2804799 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0821 11:25:06.261743 2804799 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0821 11:25:06.261748 2804799 command_runner.go:130] > # log_size_max = -1
	I0821 11:25:06.261759 2804799 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kuberentes log file
	I0821 11:25:06.261766 2804799 command_runner.go:130] > # log_to_journald = false
	I0821 11:25:06.261773 2804799 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0821 11:25:06.261779 2804799 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0821 11:25:06.261785 2804799 command_runner.go:130] > # Path to directory for container attach sockets.
	I0821 11:25:06.261794 2804799 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0821 11:25:06.261800 2804799 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0821 11:25:06.261805 2804799 command_runner.go:130] > # bind_mount_prefix = ""
	I0821 11:25:06.261812 2804799 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0821 11:25:06.261820 2804799 command_runner.go:130] > # read_only = false
	I0821 11:25:06.261827 2804799 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0821 11:25:06.261837 2804799 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0821 11:25:06.261842 2804799 command_runner.go:130] > # live configuration reload.
	I0821 11:25:06.261848 2804799 command_runner.go:130] > # log_level = "info"
	I0821 11:25:06.261856 2804799 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0821 11:25:06.261862 2804799 command_runner.go:130] > # This option supports live configuration reload.
	I0821 11:25:06.261869 2804799 command_runner.go:130] > # log_filter = ""
	I0821 11:25:06.261890 2804799 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0821 11:25:06.261898 2804799 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0821 11:25:06.261902 2804799 command_runner.go:130] > # separated by comma.
	I0821 11:25:06.261907 2804799 command_runner.go:130] > # uid_mappings = ""
	I0821 11:25:06.261914 2804799 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0821 11:25:06.261924 2804799 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0821 11:25:06.261929 2804799 command_runner.go:130] > # separated by comma.
	I0821 11:25:06.261933 2804799 command_runner.go:130] > # gid_mappings = ""
	I0821 11:25:06.261969 2804799 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0821 11:25:06.261980 2804799 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0821 11:25:06.261987 2804799 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0821 11:25:06.261992 2804799 command_runner.go:130] > # minimum_mappable_uid = -1
	I0821 11:25:06.261999 2804799 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0821 11:25:06.262007 2804799 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0821 11:25:06.262015 2804799 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0821 11:25:06.262020 2804799 command_runner.go:130] > # minimum_mappable_gid = -1
	I0821 11:25:06.262028 2804799 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0821 11:25:06.262036 2804799 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0821 11:25:06.262043 2804799 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0821 11:25:06.262050 2804799 command_runner.go:130] > # ctr_stop_timeout = 30
	I0821 11:25:06.262058 2804799 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0821 11:25:06.262067 2804799 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0821 11:25:06.262076 2804799 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0821 11:25:06.262085 2804799 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0821 11:25:06.262090 2804799 command_runner.go:130] > # drop_infra_ctr = true
	I0821 11:25:06.262100 2804799 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0821 11:25:06.262107 2804799 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0821 11:25:06.262116 2804799 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0821 11:25:06.262124 2804799 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0821 11:25:06.262132 2804799 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0821 11:25:06.262140 2804799 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0821 11:25:06.262146 2804799 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0821 11:25:06.262155 2804799 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0821 11:25:06.262163 2804799 command_runner.go:130] > # pinns_path = ""
	I0821 11:25:06.262171 2804799 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0821 11:25:06.262181 2804799 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0821 11:25:06.262188 2804799 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0821 11:25:06.262194 2804799 command_runner.go:130] > # default_runtime = "runc"
	I0821 11:25:06.262200 2804799 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0821 11:25:06.262212 2804799 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0821 11:25:06.262223 2804799 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0821 11:25:06.262231 2804799 command_runner.go:130] > # creation as a file is not desired either.
	I0821 11:25:06.262241 2804799 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0821 11:25:06.262250 2804799 command_runner.go:130] > # the hostname is being managed dynamically.
	I0821 11:25:06.262256 2804799 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0821 11:25:06.262261 2804799 command_runner.go:130] > # ]
	I0821 11:25:06.262268 2804799 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0821 11:25:06.262279 2804799 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0821 11:25:06.262289 2804799 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0821 11:25:06.262300 2804799 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0821 11:25:06.262304 2804799 command_runner.go:130] > #
	I0821 11:25:06.262310 2804799 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0821 11:25:06.262318 2804799 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0821 11:25:06.262323 2804799 command_runner.go:130] > #  runtime_type = "oci"
	I0821 11:25:06.262331 2804799 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0821 11:25:06.262337 2804799 command_runner.go:130] > #  privileged_without_host_devices = false
	I0821 11:25:06.262342 2804799 command_runner.go:130] > #  allowed_annotations = []
	I0821 11:25:06.262346 2804799 command_runner.go:130] > # Where:
	I0821 11:25:06.262353 2804799 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0821 11:25:06.262364 2804799 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0821 11:25:06.262374 2804799 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0821 11:25:06.262382 2804799 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0821 11:25:06.262389 2804799 command_runner.go:130] > #   in $PATH.
	I0821 11:25:06.262396 2804799 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0821 11:25:06.262404 2804799 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0821 11:25:06.262412 2804799 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0821 11:25:06.262417 2804799 command_runner.go:130] > #   state.
	I0821 11:25:06.262424 2804799 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0821 11:25:06.262435 2804799 command_runner.go:130] >

 #   file. This can only be used when using the VM runtime_type.
	I0821 11:25:06.262445 2804799 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0821 11:25:06.262451 2804799 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0821 11:25:06.262461 2804799 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0821 11:25:06.262469 2804799 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0821 11:25:06.262478 2804799 command_runner.go:130] > #   The currently recognized values are:
	I0821 11:25:06.262485 2804799 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0821 11:25:06.262496 2804799 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0821 11:25:06.262529 2804799 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0821 11:25:06.262540 2804799 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0821 11:25:06.262549 2804799 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0821 11:25:06.262556 2804799 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0821 11:25:06.262564 2804799 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0821 11:25:06.262572 2804799 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0821 11:25:06.262578 2804799 command_runner.go:130] > #   should be moved to the container's cgroup
	I0821 11:25:06.262583 2804799 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0821 11:25:06.262589 2804799 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0821 11:25:06.262598 2804799 command_runner.go:130] > runtime_type = "oci"
	I0821 11:25:06.262604 2804799 command_runner.go:130] > runtime_root = "/run/runc"
	I0821 11:25:06.262609 2804799 command_runner.go:130] > runtime_config_path = ""
	I0821 11:25:06.262616 2804799 command_runner.go:130] > monitor_path = ""
	I0821 11:25:06.262621 2804799 command_runner.go:130] > monitor_cgroup = ""
	I0821 11:25:06.262628 2804799 command_runner.go:130] > monitor_exec_cgroup = ""
	I0821 11:25:06.262643 2804799 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0821 11:25:06.262652 2804799 command_runner.go:130] > # running containers
	I0821 11:25:06.262657 2804799 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0821 11:25:06.262665 2804799 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0821 11:25:06.262675 2804799 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0821 11:25:06.262682 2804799 command_runner.go:130] > # surface and mitigating the consequences of a container breakout.
	I0821 11:25:06.262691 2804799 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0821 11:25:06.262697 2804799 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0821 11:25:06.262702 2804799 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0821 11:25:06.262710 2804799 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0821 11:25:06.262716 2804799 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0821 11:25:06.262727 2804799 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0821 11:25:06.262735 2804799 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0821 11:25:06.262744 2804799 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0821 11:25:06.262752 2804799 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0821 11:25:06.262764 2804799 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0821 11:25:06.262773 2804799 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0821 11:25:06.262783 2804799 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0821 11:25:06.262795 2804799 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0821 11:25:06.262807 2804799 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0821 11:25:06.262814 2804799 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0821 11:25:06.262823 2804799 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0821 11:25:06.262828 2804799 command_runner.go:130] > # Example:
	I0821 11:25:06.262834 2804799 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0821 11:25:06.262843 2804799 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0821 11:25:06.262849 2804799 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0821 11:25:06.262855 2804799 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0821 11:25:06.262862 2804799 command_runner.go:130] > # cpuset = 0
	I0821 11:25:06.262867 2804799 command_runner.go:130] > # cpushares = "0-1"
	I0821 11:25:06.262871 2804799 command_runner.go:130] > # Where:
	I0821 11:25:06.262879 2804799 command_runner.go:130] > # The workload name is workload-type.
	I0821 11:25:06.262890 2804799 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0821 11:25:06.262897 2804799 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0821 11:25:06.262904 2804799 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0821 11:25:06.262913 2804799 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0821 11:25:06.262922 2804799 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0821 11:25:06.262926 2804799 command_runner.go:130] > # 
	I0821 11:25:06.262934 2804799 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0821 11:25:06.262941 2804799 command_runner.go:130] > #
	I0821 11:25:06.262949 2804799 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0821 11:25:06.262959 2804799 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0821 11:25:06.262967 2804799 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0821 11:25:06.262978 2804799 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0821 11:25:06.262985 2804799 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0821 11:25:06.262992 2804799 command_runner.go:130] > [crio.image]
	I0821 11:25:06.262999 2804799 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0821 11:25:06.263005 2804799 command_runner.go:130] > # default_transport = "docker://"
	I0821 11:25:06.263015 2804799 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0821 11:25:06.263022 2804799 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0821 11:25:06.263033 2804799 command_runner.go:130] > # global_auth_file = ""
	I0821 11:25:06.263039 2804799 command_runner.go:130] > # The image used to instantiate infra containers.
	I0821 11:25:06.263047 2804799 command_runner.go:130] > # This option supports live configuration reload.
	I0821 11:25:06.263053 2804799 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0821 11:25:06.263061 2804799 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0821 11:25:06.263069 2804799 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0821 11:25:06.263077 2804799 command_runner.go:130] > # This option supports live configuration reload.
	I0821 11:25:06.263103 2804799 command_runner.go:130] > # pause_image_auth_file = ""
	I0821 11:25:06.263113 2804799 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0821 11:25:06.263121 2804799 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0821 11:25:06.263128 2804799 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0821 11:25:06.263149 2804799 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0821 11:25:06.263155 2804799 command_runner.go:130] > # pause_command = "/pause"
	I0821 11:25:06.263162 2804799 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0821 11:25:06.263170 2804799 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0821 11:25:06.263182 2804799 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0821 11:25:06.263190 2804799 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0821 11:25:06.263196 2804799 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0821 11:25:06.263205 2804799 command_runner.go:130] > # signature_policy = ""
	I0821 11:25:06.263213 2804799 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0821 11:25:06.263224 2804799 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0821 11:25:06.263229 2804799 command_runner.go:130] > # changing them here.
	I0821 11:25:06.263234 2804799 command_runner.go:130] > # insecure_registries = [
	I0821 11:25:06.263238 2804799 command_runner.go:130] > # ]
	I0821 11:25:06.263246 2804799 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0821 11:25:06.263252 2804799 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0821 11:25:06.263261 2804799 command_runner.go:130] > # image_volumes = "mkdir"
	I0821 11:25:06.263268 2804799 command_runner.go:130] > # Temporary directory to use for storing big files
	I0821 11:25:06.263273 2804799 command_runner.go:130] > # big_files_temporary_dir = ""
	I0821 11:25:06.263283 2804799 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0821 11:25:06.263288 2804799 command_runner.go:130] > # CNI plugins.
	I0821 11:25:06.263294 2804799 command_runner.go:130] > [crio.network]
	I0821 11:25:06.263302 2804799 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0821 11:25:06.263311 2804799 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0821 11:25:06.263318 2804799 command_runner.go:130] > # cni_default_network = ""
	I0821 11:25:06.263325 2804799 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0821 11:25:06.263335 2804799 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0821 11:25:06.263345 2804799 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0821 11:25:06.263353 2804799 command_runner.go:130] > # plugin_dirs = [
	I0821 11:25:06.263359 2804799 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0821 11:25:06.263363 2804799 command_runner.go:130] > # ]
	I0821 11:25:06.263370 2804799 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0821 11:25:06.263378 2804799 command_runner.go:130] > [crio.metrics]
	I0821 11:25:06.263384 2804799 command_runner.go:130] > # Globally enable or disable metrics support.
	I0821 11:25:06.263389 2804799 command_runner.go:130] > # enable_metrics = false
	I0821 11:25:06.263397 2804799 command_runner.go:130] > # Specify enabled metrics collectors.
	I0821 11:25:06.263403 2804799 command_runner.go:130] > # Per default all metrics are enabled.
	I0821 11:25:06.263410 2804799 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0821 11:25:06.263418 2804799 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0821 11:25:06.263427 2804799 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0821 11:25:06.263432 2804799 command_runner.go:130] > # metrics_collectors = [
	I0821 11:25:06.263436 2804799 command_runner.go:130] > # 	"operations",
	I0821 11:25:06.263444 2804799 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0821 11:25:06.263452 2804799 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0821 11:25:06.263459 2804799 command_runner.go:130] > # 	"operations_errors",
	I0821 11:25:06.263466 2804799 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0821 11:25:06.263471 2804799 command_runner.go:130] > # 	"image_pulls_by_name",
	I0821 11:25:06.263476 2804799 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0821 11:25:06.263481 2804799 command_runner.go:130] > # 	"image_pulls_failures",
	I0821 11:25:06.263487 2804799 command_runner.go:130] > # 	"image_pulls_successes",
	I0821 11:25:06.263492 2804799 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0821 11:25:06.263496 2804799 command_runner.go:130] > # 	"image_layer_reuse",
	I0821 11:25:06.263504 2804799 command_runner.go:130] > # 	"containers_oom_total",
	I0821 11:25:06.263509 2804799 command_runner.go:130] > # 	"containers_oom",
	I0821 11:25:06.263516 2804799 command_runner.go:130] > # 	"processes_defunct",
	I0821 11:25:06.263521 2804799 command_runner.go:130] > # 	"operations_total",
	I0821 11:25:06.263526 2804799 command_runner.go:130] > # 	"operations_latency_seconds",
	I0821 11:25:06.263534 2804799 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0821 11:25:06.263539 2804799 command_runner.go:130] > # 	"operations_errors_total",
	I0821 11:25:06.263545 2804799 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0821 11:25:06.263553 2804799 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0821 11:25:06.263560 2804799 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0821 11:25:06.263567 2804799 command_runner.go:130] > # 	"image_pulls_success_total",
	I0821 11:25:06.263573 2804799 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0821 11:25:06.263578 2804799 command_runner.go:130] > # 	"containers_oom_count_total",
	I0821 11:25:06.263584 2804799 command_runner.go:130] > # ]
	I0821 11:25:06.263591 2804799 command_runner.go:130] > # The port on which the metrics server will listen.
	I0821 11:25:06.263598 2804799 command_runner.go:130] > # metrics_port = 9090
	I0821 11:25:06.263604 2804799 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0821 11:25:06.263609 2804799 command_runner.go:130] > # metrics_socket = ""
	I0821 11:25:06.263617 2804799 command_runner.go:130] > # The certificate for the secure metrics server.
	I0821 11:25:06.263626 2804799 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0821 11:25:06.263635 2804799 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0821 11:25:06.263641 2804799 command_runner.go:130] > # certificate on any modification event.
	I0821 11:25:06.263646 2804799 command_runner.go:130] > # metrics_cert = ""
	I0821 11:25:06.263652 2804799 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0821 11:25:06.263658 2804799 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0821 11:25:06.263665 2804799 command_runner.go:130] > # metrics_key = ""
	I0821 11:25:06.263672 2804799 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0821 11:25:06.263679 2804799 command_runner.go:130] > [crio.tracing]
	I0821 11:25:06.263687 2804799 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0821 11:25:06.263693 2804799 command_runner.go:130] > # enable_tracing = false
	I0821 11:25:06.263703 2804799 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0821 11:25:06.263708 2804799 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0821 11:25:06.263717 2804799 command_runner.go:130] > # Number of samples to collect per million spans.
	I0821 11:25:06.263723 2804799 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0821 11:25:06.263730 2804799 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0821 11:25:06.263735 2804799 command_runner.go:130] > [crio.stats]
	I0821 11:25:06.263741 2804799 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0821 11:25:06.263748 2804799 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0821 11:25:06.263755 2804799 command_runner.go:130] > # stats_collection_period = 0
	I0821 11:25:06.265673 2804799 command_runner.go:130] ! time="2023-08-21 11:25:06.253023882Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I0821 11:25:06.265694 2804799 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0821 11:25:06.266063 2804799 cni.go:84] Creating CNI manager for ""
	I0821 11:25:06.266082 2804799 cni.go:136] 1 nodes found, recommending kindnet
	I0821 11:25:06.266129 2804799 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0821 11:25:06.266156 2804799 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.27.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-994910 NodeName:multinode-994910 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0821 11:25:06.266342 2804799 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-994910"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0821 11:25:06.266427 2804799 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-994910 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.4 ClusterName:multinode-994910 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0821 11:25:06.266523 2804799 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.4
	I0821 11:25:06.275980 2804799 command_runner.go:130] > kubeadm
	I0821 11:25:06.276000 2804799 command_runner.go:130] > kubectl
	I0821 11:25:06.276005 2804799 command_runner.go:130] > kubelet
	I0821 11:25:06.277095 2804799 binaries.go:44] Found k8s binaries, skipping transfer
	I0821 11:25:06.277169 2804799 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0821 11:25:06.288256 2804799 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (426 bytes)
	I0821 11:25:06.312701 2804799 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0821 11:25:06.336183 2804799 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
	I0821 11:25:06.357041 2804799 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0821 11:25:06.361449 2804799 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0821 11:25:06.374506 2804799 certs.go:56] Setting up /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/multinode-994910 for IP: 192.168.58.2
	I0821 11:25:06.374537 2804799 certs.go:190] acquiring lock for shared ca certs: {Name:mkf22db11ef8c10db9220127fbe1c5ce3b246b6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 11:25:06.374669 2804799 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17102-2734539/.minikube/ca.key
	I0821 11:25:06.374716 2804799 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17102-2734539/.minikube/proxy-client-ca.key
	I0821 11:25:06.374762 2804799 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/multinode-994910/client.key
	I0821 11:25:06.374776 2804799 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/multinode-994910/client.crt with IP's: []
	I0821 11:25:06.675524 2804799 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/multinode-994910/client.crt ...
	I0821 11:25:06.675556 2804799 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/multinode-994910/client.crt: {Name:mk552b64301b9c66407a1863e1c03648b5743f7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 11:25:06.675753 2804799 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/multinode-994910/client.key ...
	I0821 11:25:06.675765 2804799 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/multinode-994910/client.key: {Name:mkbde190c39f12640a0a75ec181fdeb82984bdc2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 11:25:06.675855 2804799 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/multinode-994910/apiserver.key.cee25041
	I0821 11:25:06.675871 2804799 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/multinode-994910/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0821 11:25:07.810124 2804799 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/multinode-994910/apiserver.crt.cee25041 ...
	I0821 11:25:07.810160 2804799 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/multinode-994910/apiserver.crt.cee25041: {Name:mk6506e68a3f137a8611a272a424579ab6ff3b38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 11:25:07.810358 2804799 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/multinode-994910/apiserver.key.cee25041 ...
	I0821 11:25:07.810371 2804799 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/multinode-994910/apiserver.key.cee25041: {Name:mkc86f4df0030d1ecfa189ce05f4c3a8f67f3805 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 11:25:07.810455 2804799 certs.go:337] copying /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/multinode-994910/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/multinode-994910/apiserver.crt
	I0821 11:25:07.810526 2804799 certs.go:341] copying /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/multinode-994910/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/multinode-994910/apiserver.key
	I0821 11:25:07.810584 2804799 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/multinode-994910/proxy-client.key
	I0821 11:25:07.810600 2804799 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/multinode-994910/proxy-client.crt with IP's: []
	I0821 11:25:08.242501 2804799 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/multinode-994910/proxy-client.crt ...
	I0821 11:25:08.242535 2804799 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/multinode-994910/proxy-client.crt: {Name:mk80c14079ad98565e1adfe3196e1804605c7373 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 11:25:08.242726 2804799 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/multinode-994910/proxy-client.key ...
	I0821 11:25:08.242737 2804799 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/multinode-994910/proxy-client.key: {Name:mk9e9e8bc88e83c5174beeacc2c4c74afb795fd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 11:25:08.242850 2804799 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/multinode-994910/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0821 11:25:08.242872 2804799 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/multinode-994910/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0821 11:25:08.242885 2804799 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/multinode-994910/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0821 11:25:08.242902 2804799 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/multinode-994910/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0821 11:25:08.242913 2804799 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-2734539/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0821 11:25:08.242929 2804799 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-2734539/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0821 11:25:08.242943 2804799 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-2734539/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0821 11:25:08.242954 2804799 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-2734539/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0821 11:25:08.243012 2804799 certs.go:437] found cert: /home/jenkins/minikube-integration/17102-2734539/.minikube/certs/home/jenkins/minikube-integration/17102-2734539/.minikube/certs/2739930.pem (1338 bytes)
	W0821 11:25:08.243055 2804799 certs.go:433] ignoring /home/jenkins/minikube-integration/17102-2734539/.minikube/certs/home/jenkins/minikube-integration/17102-2734539/.minikube/certs/2739930_empty.pem, impossibly tiny 0 bytes
	I0821 11:25:08.243067 2804799 certs.go:437] found cert: /home/jenkins/minikube-integration/17102-2734539/.minikube/certs/home/jenkins/minikube-integration/17102-2734539/.minikube/certs/ca-key.pem (1679 bytes)
	I0821 11:25:08.243095 2804799 certs.go:437] found cert: /home/jenkins/minikube-integration/17102-2734539/.minikube/certs/home/jenkins/minikube-integration/17102-2734539/.minikube/certs/ca.pem (1078 bytes)
	I0821 11:25:08.243124 2804799 certs.go:437] found cert: /home/jenkins/minikube-integration/17102-2734539/.minikube/certs/home/jenkins/minikube-integration/17102-2734539/.minikube/certs/cert.pem (1123 bytes)
	I0821 11:25:08.243152 2804799 certs.go:437] found cert: /home/jenkins/minikube-integration/17102-2734539/.minikube/certs/home/jenkins/minikube-integration/17102-2734539/.minikube/certs/key.pem (1675 bytes)
	I0821 11:25:08.243199 2804799 certs.go:437] found cert: /home/jenkins/minikube-integration/17102-2734539/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17102-2734539/.minikube/files/etc/ssl/certs/27399302.pem (1708 bytes)
	I0821 11:25:08.243229 2804799 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-2734539/.minikube/files/etc/ssl/certs/27399302.pem -> /usr/share/ca-certificates/27399302.pem
	I0821 11:25:08.243245 2804799 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-2734539/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0821 11:25:08.243255 2804799 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-2734539/.minikube/certs/2739930.pem -> /usr/share/ca-certificates/2739930.pem
	I0821 11:25:08.243915 2804799 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/multinode-994910/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0821 11:25:08.274184 2804799 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/multinode-994910/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0821 11:25:08.303799 2804799 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/multinode-994910/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0821 11:25:08.333282 2804799 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/multinode-994910/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0821 11:25:08.362896 2804799 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-2734539/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0821 11:25:08.392964 2804799 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-2734539/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0821 11:25:08.421411 2804799 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-2734539/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0821 11:25:08.450176 2804799 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-2734539/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0821 11:25:08.478195 2804799 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-2734539/.minikube/files/etc/ssl/certs/27399302.pem --> /usr/share/ca-certificates/27399302.pem (1708 bytes)
	I0821 11:25:08.506111 2804799 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-2734539/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0821 11:25:08.533855 2804799 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-2734539/.minikube/certs/2739930.pem --> /usr/share/ca-certificates/2739930.pem (1338 bytes)
	I0821 11:25:08.561425 2804799 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0821 11:25:08.581950 2804799 ssh_runner.go:195] Run: openssl version
	I0821 11:25:08.588553 2804799 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0821 11:25:08.588636 2804799 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27399302.pem && ln -fs /usr/share/ca-certificates/27399302.pem /etc/ssl/certs/27399302.pem"
	I0821 11:25:08.600136 2804799 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/27399302.pem
	I0821 11:25:08.604724 2804799 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug 21 11:09 /usr/share/ca-certificates/27399302.pem
	I0821 11:25:08.604750 2804799 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 21 11:09 /usr/share/ca-certificates/27399302.pem
	I0821 11:25:08.604805 2804799 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27399302.pem
	I0821 11:25:08.612857 2804799 command_runner.go:130] > 3ec20f2e
	I0821 11:25:08.613286 2804799 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/27399302.pem /etc/ssl/certs/3ec20f2e.0"
	I0821 11:25:08.625220 2804799 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0821 11:25:08.636950 2804799 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0821 11:25:08.641485 2804799 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug 21 11:03 /usr/share/ca-certificates/minikubeCA.pem
	I0821 11:25:08.641554 2804799 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 21 11:03 /usr/share/ca-certificates/minikubeCA.pem
	I0821 11:25:08.641637 2804799 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0821 11:25:08.649763 2804799 command_runner.go:130] > b5213941
	I0821 11:25:08.650168 2804799 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0821 11:25:08.661567 2804799 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2739930.pem && ln -fs /usr/share/ca-certificates/2739930.pem /etc/ssl/certs/2739930.pem"
	I0821 11:25:08.672873 2804799 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2739930.pem
	I0821 11:25:08.677548 2804799 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug 21 11:09 /usr/share/ca-certificates/2739930.pem
	I0821 11:25:08.677575 2804799 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 21 11:09 /usr/share/ca-certificates/2739930.pem
	I0821 11:25:08.677646 2804799 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2739930.pem
	I0821 11:25:08.685739 2804799 command_runner.go:130] > 51391683
	I0821 11:25:08.686248 2804799 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2739930.pem /etc/ssl/certs/51391683.0"
	I0821 11:25:08.697763 2804799 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0821 11:25:08.701935 2804799 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0821 11:25:08.701966 2804799 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0821 11:25:08.702037 2804799 kubeadm.go:404] StartCluster: {Name:multinode-994910 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:multinode-994910 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 11:25:08.702128 2804799 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0821 11:25:08.702185 2804799 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0821 11:25:08.746174 2804799 cri.go:89] found id: ""
	I0821 11:25:08.746297 2804799 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0821 11:25:08.757038 2804799 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0821 11:25:08.757065 2804799 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0821 11:25:08.757074 2804799 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0821 11:25:08.757151 2804799 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0821 11:25:08.767982 2804799 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0821 11:25:08.768099 2804799 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0821 11:25:08.779148 2804799 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0821 11:25:08.779171 2804799 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0821 11:25:08.779180 2804799 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0821 11:25:08.779208 2804799 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0821 11:25:08.779239 2804799 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0821 11:25:08.779281 2804799 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0821 11:25:08.835640 2804799 kubeadm.go:322] [init] Using Kubernetes version: v1.27.4
	I0821 11:25:08.835670 2804799 command_runner.go:130] > [init] Using Kubernetes version: v1.27.4
	I0821 11:25:08.836065 2804799 kubeadm.go:322] [preflight] Running pre-flight checks
	I0821 11:25:08.836084 2804799 command_runner.go:130] > [preflight] Running pre-flight checks
	I0821 11:25:08.880323 2804799 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0821 11:25:08.880390 2804799 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0821 11:25:08.880471 2804799 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1041-aws
	I0821 11:25:08.880493 2804799 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1041-aws
	I0821 11:25:08.880538 2804799 kubeadm.go:322] OS: Linux
	I0821 11:25:08.880570 2804799 command_runner.go:130] > OS: Linux
	I0821 11:25:08.880633 2804799 kubeadm.go:322] CGROUPS_CPU: enabled
	I0821 11:25:08.880652 2804799 command_runner.go:130] > CGROUPS_CPU: enabled
	I0821 11:25:08.880720 2804799 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0821 11:25:08.880742 2804799 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0821 11:25:08.880801 2804799 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0821 11:25:08.880828 2804799 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0821 11:25:08.880891 2804799 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0821 11:25:08.880913 2804799 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0821 11:25:08.880984 2804799 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0821 11:25:08.881006 2804799 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0821 11:25:08.881080 2804799 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0821 11:25:08.881100 2804799 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0821 11:25:08.881169 2804799 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0821 11:25:08.881190 2804799 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0821 11:25:08.881256 2804799 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0821 11:25:08.881280 2804799 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0821 11:25:08.881342 2804799 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0821 11:25:08.881370 2804799 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0821 11:25:08.967367 2804799 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0821 11:25:08.967392 2804799 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0821 11:25:08.967481 2804799 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0821 11:25:08.967491 2804799 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0821 11:25:08.967577 2804799 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0821 11:25:08.967586 2804799 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0821 11:25:09.224299 2804799 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0821 11:25:09.228483 2804799 out.go:204]   - Generating certificates and keys ...
	I0821 11:25:09.224502 2804799 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0821 11:25:09.228654 2804799 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0821 11:25:09.228674 2804799 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0821 11:25:09.228764 2804799 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0821 11:25:09.228779 2804799 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0821 11:25:09.507089 2804799 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0821 11:25:09.507118 2804799 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0821 11:25:09.794918 2804799 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0821 11:25:09.794946 2804799 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0821 11:25:11.111653 2804799 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0821 11:25:11.111668 2804799 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0821 11:25:11.343083 2804799 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0821 11:25:11.343111 2804799 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0821 11:25:11.590951 2804799 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0821 11:25:11.590983 2804799 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0821 11:25:11.591335 2804799 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-994910] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0821 11:25:11.591364 2804799 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-994910] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0821 11:25:12.775610 2804799 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0821 11:25:12.775637 2804799 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0821 11:25:12.776025 2804799 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-994910] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0821 11:25:12.776037 2804799 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-994910] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0821 11:25:13.023276 2804799 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0821 11:25:13.023301 2804799 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0821 11:25:13.413386 2804799 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0821 11:25:13.413411 2804799 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0821 11:25:13.934132 2804799 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0821 11:25:13.934157 2804799 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0821 11:25:13.934434 2804799 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0821 11:25:13.934452 2804799 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0821 11:25:14.169426 2804799 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0821 11:25:14.169452 2804799 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0821 11:25:14.789540 2804799 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0821 11:25:14.789565 2804799 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0821 11:25:15.175607 2804799 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0821 11:25:15.175640 2804799 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0821 11:25:16.038220 2804799 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0821 11:25:16.038250 2804799 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0821 11:25:16.050369 2804799 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0821 11:25:16.050395 2804799 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0821 11:25:16.050492 2804799 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0821 11:25:16.050501 2804799 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0821 11:25:16.050545 2804799 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0821 11:25:16.050554 2804799 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0821 11:25:16.150355 2804799 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0821 11:25:16.150376 2804799 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0821 11:25:16.154044 2804799 out.go:204]   - Booting up control plane ...
	I0821 11:25:16.154159 2804799 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0821 11:25:16.154225 2804799 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0821 11:25:16.154311 2804799 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0821 11:25:16.154315 2804799 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0821 11:25:16.154962 2804799 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0821 11:25:16.154983 2804799 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0821 11:25:16.157859 2804799 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0821 11:25:16.157890 2804799 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0821 11:25:16.161235 2804799 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0821 11:25:16.161261 2804799 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0821 11:25:25.163759 2804799 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.002558 seconds
	I0821 11:25:25.163784 2804799 command_runner.go:130] > [apiclient] All control plane components are healthy after 9.002558 seconds
	I0821 11:25:25.163889 2804799 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0821 11:25:25.163895 2804799 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0821 11:25:25.177709 2804799 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0821 11:25:25.177734 2804799 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0821 11:25:25.703178 2804799 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0821 11:25:25.703206 2804799 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0821 11:25:25.703385 2804799 kubeadm.go:322] [mark-control-plane] Marking the node multinode-994910 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0821 11:25:25.703395 2804799 command_runner.go:130] > [mark-control-plane] Marking the node multinode-994910 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0821 11:25:26.216311 2804799 kubeadm.go:322] [bootstrap-token] Using token: 6wbzq9.kta1zggocxam5hot
	I0821 11:25:26.221015 2804799 out.go:204]   - Configuring RBAC rules ...
	I0821 11:25:26.216429 2804799 command_runner.go:130] > [bootstrap-token] Using token: 6wbzq9.kta1zggocxam5hot
	I0821 11:25:26.221221 2804799 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0821 11:25:26.221238 2804799 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0821 11:25:26.228300 2804799 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0821 11:25:26.228321 2804799 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0821 11:25:26.236517 2804799 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0821 11:25:26.236541 2804799 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0821 11:25:26.240662 2804799 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0821 11:25:26.240685 2804799 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0821 11:25:26.244722 2804799 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0821 11:25:26.244744 2804799 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0821 11:25:26.248650 2804799 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0821 11:25:26.248675 2804799 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0821 11:25:26.262966 2804799 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0821 11:25:26.262992 2804799 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0821 11:25:26.532740 2804799 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0821 11:25:26.532766 2804799 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0821 11:25:26.658667 2804799 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0821 11:25:26.658690 2804799 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0821 11:25:26.659559 2804799 kubeadm.go:322] 
	I0821 11:25:26.659628 2804799 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0821 11:25:26.659640 2804799 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0821 11:25:26.659645 2804799 kubeadm.go:322] 
	I0821 11:25:26.659721 2804799 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0821 11:25:26.659731 2804799 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0821 11:25:26.659737 2804799 kubeadm.go:322] 
	I0821 11:25:26.659761 2804799 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0821 11:25:26.659769 2804799 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0821 11:25:26.659824 2804799 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0821 11:25:26.659833 2804799 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0821 11:25:26.659880 2804799 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0821 11:25:26.659888 2804799 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0821 11:25:26.659892 2804799 kubeadm.go:322] 
	I0821 11:25:26.659949 2804799 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0821 11:25:26.659958 2804799 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0821 11:25:26.659962 2804799 kubeadm.go:322] 
	I0821 11:25:26.660011 2804799 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0821 11:25:26.660020 2804799 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0821 11:25:26.660024 2804799 kubeadm.go:322] 
	I0821 11:25:26.660073 2804799 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0821 11:25:26.660084 2804799 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0821 11:25:26.660155 2804799 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0821 11:25:26.660163 2804799 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0821 11:25:26.660227 2804799 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0821 11:25:26.660235 2804799 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0821 11:25:26.660240 2804799 kubeadm.go:322] 
	I0821 11:25:26.660319 2804799 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0821 11:25:26.660327 2804799 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0821 11:25:26.660399 2804799 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0821 11:25:26.660407 2804799 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0821 11:25:26.660412 2804799 kubeadm.go:322] 
	I0821 11:25:26.660491 2804799 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 6wbzq9.kta1zggocxam5hot \
	I0821 11:25:26.660499 2804799 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 6wbzq9.kta1zggocxam5hot \
	I0821 11:25:26.660596 2804799 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:53df1391c07b454a6b96f5fce415fe23bfbfcda331215b828a9e1234aa2104c1 \
	I0821 11:25:26.660604 2804799 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:53df1391c07b454a6b96f5fce415fe23bfbfcda331215b828a9e1234aa2104c1 \
	I0821 11:25:26.660624 2804799 kubeadm.go:322] 	--control-plane 
	I0821 11:25:26.660631 2804799 command_runner.go:130] > 	--control-plane 
	I0821 11:25:26.660636 2804799 kubeadm.go:322] 
	I0821 11:25:26.660720 2804799 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0821 11:25:26.660728 2804799 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0821 11:25:26.660733 2804799 kubeadm.go:322] 
	I0821 11:25:26.660810 2804799 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 6wbzq9.kta1zggocxam5hot \
	I0821 11:25:26.660818 2804799 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 6wbzq9.kta1zggocxam5hot \
	I0821 11:25:26.660914 2804799 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:53df1391c07b454a6b96f5fce415fe23bfbfcda331215b828a9e1234aa2104c1 
	I0821 11:25:26.660922 2804799 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:53df1391c07b454a6b96f5fce415fe23bfbfcda331215b828a9e1234aa2104c1 
	I0821 11:25:26.663442 2804799 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1041-aws\n", err: exit status 1
	I0821 11:25:26.663465 2804799 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1041-aws\n", err: exit status 1
	I0821 11:25:26.663599 2804799 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0821 11:25:26.663615 2804799 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0821 11:25:26.663640 2804799 cni.go:84] Creating CNI manager for ""
	I0821 11:25:26.663660 2804799 cni.go:136] 1 nodes found, recommending kindnet
	I0821 11:25:26.666387 2804799 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0821 11:25:26.668589 2804799 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0821 11:25:26.682867 2804799 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0821 11:25:26.682889 2804799 command_runner.go:130] >   Size: 3841245   	Blocks: 7504       IO Block: 4096   regular file
	I0821 11:25:26.682897 2804799 command_runner.go:130] > Device: 36h/54d	Inode: 5713632     Links: 1
	I0821 11:25:26.682905 2804799 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0821 11:25:26.682930 2804799 command_runner.go:130] > Access: 2023-05-09 19:54:42.000000000 +0000
	I0821 11:25:26.682945 2804799 command_runner.go:130] > Modify: 2023-05-09 19:54:42.000000000 +0000
	I0821 11:25:26.682951 2804799 command_runner.go:130] > Change: 2023-08-21 11:02:39.230246643 +0000
	I0821 11:25:26.682963 2804799 command_runner.go:130] >  Birth: 2023-08-21 11:02:39.186247522 +0000
	I0821 11:25:26.686408 2804799 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.27.4/kubectl ...
	I0821 11:25:26.686434 2804799 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0821 11:25:26.744792 2804799 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0821 11:25:27.601441 2804799 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0821 11:25:27.601463 2804799 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0821 11:25:27.601470 2804799 command_runner.go:130] > serviceaccount/kindnet created
	I0821 11:25:27.601476 2804799 command_runner.go:130] > daemonset.apps/kindnet created
	I0821 11:25:27.601504 2804799 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0821 11:25:27.601609 2804799 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:25:27.601627 2804799 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43 minikube.k8s.io/name=multinode-994910 minikube.k8s.io/updated_at=2023_08_21T11_25_27_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:25:27.746158 2804799 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0821 11:25:27.750455 2804799 command_runner.go:130] > -16
	I0821 11:25:27.750485 2804799 ops.go:34] apiserver oom_adj: -16
	I0821 11:25:27.750559 2804799 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:25:27.756507 2804799 command_runner.go:130] > node/multinode-994910 labeled
	I0821 11:25:27.885219 2804799 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0821 11:25:27.885314 2804799 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:25:27.977779 2804799 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0821 11:25:28.482604 2804799 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:25:28.570453 2804799 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0821 11:25:28.982525 2804799 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:25:29.066971 2804799 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0821 11:25:29.482030 2804799 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:25:29.566756 2804799 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0821 11:25:29.982733 2804799 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:25:30.081346 2804799 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0821 11:25:30.482885 2804799 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:25:30.570523 2804799 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0821 11:25:30.983073 2804799 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:25:31.068375 2804799 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0821 11:25:31.482928 2804799 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:25:31.573781 2804799 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0821 11:25:31.982395 2804799 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:25:32.073552 2804799 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0821 11:25:32.482020 2804799 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:25:32.572987 2804799 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0821 11:25:32.982652 2804799 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:25:33.070278 2804799 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0821 11:25:33.482683 2804799 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:25:33.568372 2804799 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0821 11:25:33.982536 2804799 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:25:34.069000 2804799 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0821 11:25:34.482116 2804799 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:25:34.573157 2804799 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0821 11:25:34.982775 2804799 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:25:35.073363 2804799 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0821 11:25:35.482109 2804799 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:25:35.579093 2804799 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0821 11:25:35.982692 2804799 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:25:36.076240 2804799 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0821 11:25:36.483056 2804799 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:25:36.582724 2804799 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0821 11:25:36.982100 2804799 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:25:37.082034 2804799 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0821 11:25:37.482985 2804799 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:25:37.568472 2804799 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0821 11:25:37.982761 2804799 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:25:38.091994 2804799 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0821 11:25:38.482624 2804799 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:25:38.581785 2804799 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0821 11:25:38.982395 2804799 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:25:39.078815 2804799 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0821 11:25:39.482091 2804799 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:25:39.572833 2804799 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0821 11:25:39.982096 2804799 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0821 11:25:40.148415 2804799 command_runner.go:130] > NAME      SECRETS   AGE
	I0821 11:25:40.148435 2804799 command_runner.go:130] > default   0         1s
	I0821 11:25:40.149857 2804799 kubeadm.go:1081] duration metric: took 12.548341246s to wait for elevateKubeSystemPrivileges.
	I0821 11:25:40.149893 2804799 kubeadm.go:406] StartCluster complete in 31.44786017s
	I0821 11:25:40.149912 2804799 settings.go:142] acquiring lock: {Name:mk3be5267b0ceee2c9bd00120994fcda13aa9019 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 11:25:40.149975 2804799 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17102-2734539/kubeconfig
	I0821 11:25:40.150673 2804799 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17102-2734539/kubeconfig: {Name:mk4bece1b106c2586469807b701290be2026992b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 11:25:40.151165 2804799 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17102-2734539/kubeconfig
	I0821 11:25:40.151452 2804799 kapi.go:59] client config for multinode-994910: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/multinode-994910/client.crt", KeyFile:"/home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/multinode-994910/client.key", CAFile:"/home/jenkins/minikube-integration/17102-2734539/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1721b00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0821 11:25:40.152616 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0821 11:25:40.152626 2804799 round_trippers.go:469] Request Headers:
	I0821 11:25:40.152636 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:25:40.152643 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:25:40.152863 2804799 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0821 11:25:40.153131 2804799 config.go:182] Loaded profile config "multinode-994910": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0821 11:25:40.153166 2804799 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0821 11:25:40.153226 2804799 addons.go:69] Setting storage-provisioner=true in profile "multinode-994910"
	I0821 11:25:40.153239 2804799 addons.go:231] Setting addon storage-provisioner=true in "multinode-994910"
	I0821 11:25:40.153299 2804799 host.go:66] Checking if "multinode-994910" exists ...
	I0821 11:25:40.153803 2804799 cli_runner.go:164] Run: docker container inspect multinode-994910 --format={{.State.Status}}
	I0821 11:25:40.154459 2804799 cert_rotation.go:137] Starting client certificate rotation controller
	I0821 11:25:40.154495 2804799 addons.go:69] Setting default-storageclass=true in profile "multinode-994910"
	I0821 11:25:40.154508 2804799 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-994910"
	I0821 11:25:40.154780 2804799 cli_runner.go:164] Run: docker container inspect multinode-994910 --format={{.State.Status}}
	I0821 11:25:40.206029 2804799 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0821 11:25:40.208232 2804799 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0821 11:25:40.208252 2804799 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0821 11:25:40.208324 2804799 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-994910
	I0821 11:25:40.225124 2804799 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17102-2734539/kubeconfig
	I0821 11:25:40.225389 2804799 kapi.go:59] client config for multinode-994910: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/multinode-994910/client.crt", KeyFile:"/home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/multinode-994910/client.key", CAFile:"/home/jenkins/minikube-integration/17102-2734539/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1721b00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0821 11:25:40.225727 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses
	I0821 11:25:40.225736 2804799 round_trippers.go:469] Request Headers:
	I0821 11:25:40.225745 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:25:40.225752 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:25:40.238825 2804799 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36263 SSHKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/machines/multinode-994910/id_rsa Username:docker}
	I0821 11:25:40.244231 2804799 round_trippers.go:574] Response Status: 200 OK in 91 milliseconds
	I0821 11:25:40.244254 2804799 round_trippers.go:577] Response Headers:
	I0821 11:25:40.244264 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:25:40.244271 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:25:40.244278 2804799 round_trippers.go:580]     Content-Length: 291
	I0821 11:25:40.244284 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:25:40 GMT
	I0821 11:25:40.244291 2804799 round_trippers.go:580]     Audit-Id: 41391ef3-836a-44ef-b432-dfc34bfec327
	I0821 11:25:40.244298 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:25:40.244304 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:25:40.252011 2804799 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0178f5ef-d7de-4a72-bc3c-366a7efa6d34","resourceVersion":"361","creationTimestamp":"2023-08-21T11:25:26Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0821 11:25:40.252439 2804799 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0178f5ef-d7de-4a72-bc3c-366a7efa6d34","resourceVersion":"361","creationTimestamp":"2023-08-21T11:25:26Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0821 11:25:40.252490 2804799 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0821 11:25:40.252496 2804799 round_trippers.go:469] Request Headers:
	I0821 11:25:40.252505 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:25:40.252512 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:25:40.252519 2804799 round_trippers.go:473]     Content-Type: application/json
	I0821 11:25:40.275220 2804799 round_trippers.go:574] Response Status: 200 OK in 49 milliseconds
	I0821 11:25:40.275243 2804799 round_trippers.go:577] Response Headers:
	I0821 11:25:40.275252 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:25:40.275259 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:25:40.275266 2804799 round_trippers.go:580]     Content-Length: 109
	I0821 11:25:40.275273 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:25:40 GMT
	I0821 11:25:40.275279 2804799 round_trippers.go:580]     Audit-Id: 96408828-2ca8-4e48-9de4-ab27554f92ea
	I0821 11:25:40.275286 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:25:40.275293 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:25:40.283853 2804799 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"375"},"items":[]}
	I0821 11:25:40.284151 2804799 addons.go:231] Setting addon default-storageclass=true in "multinode-994910"
	I0821 11:25:40.284181 2804799 host.go:66] Checking if "multinode-994910" exists ...
	I0821 11:25:40.284618 2804799 cli_runner.go:164] Run: docker container inspect multinode-994910 --format={{.State.Status}}
	I0821 11:25:40.300196 2804799 round_trippers.go:574] Response Status: 200 OK in 47 milliseconds
	I0821 11:25:40.300218 2804799 round_trippers.go:577] Response Headers:
	I0821 11:25:40.300226 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:25:40 GMT
	I0821 11:25:40.300233 2804799 round_trippers.go:580]     Audit-Id: e08b068f-a4df-4294-9a08-4d0af0bf7d98
	I0821 11:25:40.300240 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:25:40.300246 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:25:40.300253 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:25:40.300259 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:25:40.300266 2804799 round_trippers.go:580]     Content-Length: 291
	I0821 11:25:40.301039 2804799 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0178f5ef-d7de-4a72-bc3c-366a7efa6d34","resourceVersion":"378","creationTimestamp":"2023-08-21T11:25:26Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0821 11:25:40.301196 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0821 11:25:40.301203 2804799 round_trippers.go:469] Request Headers:
	I0821 11:25:40.301211 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:25:40.301218 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:25:40.327714 2804799 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0821 11:25:40.327735 2804799 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0821 11:25:40.327800 2804799 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-994910
	I0821 11:25:40.341704 2804799 round_trippers.go:574] Response Status: 200 OK in 40 milliseconds
	I0821 11:25:40.341728 2804799 round_trippers.go:577] Response Headers:
	I0821 11:25:40.341737 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:25:40.341744 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:25:40.341751 2804799 round_trippers.go:580]     Content-Length: 291
	I0821 11:25:40.341757 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:25:40 GMT
	I0821 11:25:40.341764 2804799 round_trippers.go:580]     Audit-Id: ba2c8f1e-7ee1-43ac-bae1-6ea220de8238
	I0821 11:25:40.341771 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:25:40.341777 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:25:40.348809 2804799 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0178f5ef-d7de-4a72-bc3c-366a7efa6d34","resourceVersion":"378","creationTimestamp":"2023-08-21T11:25:26Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0821 11:25:40.348923 2804799 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-994910" context rescaled to 1 replicas
	I0821 11:25:40.348949 2804799 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0821 11:25:40.351796 2804799 out.go:177] * Verifying Kubernetes components...
	I0821 11:25:40.353865 2804799 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0821 11:25:40.377382 2804799 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36263 SSHKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/machines/multinode-994910/id_rsa Username:docker}
	I0821 11:25:40.459779 2804799 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0821 11:25:40.489272 2804799 command_runner.go:130] > apiVersion: v1
	I0821 11:25:40.489331 2804799 command_runner.go:130] > data:
	I0821 11:25:40.489350 2804799 command_runner.go:130] >   Corefile: |
	I0821 11:25:40.489369 2804799 command_runner.go:130] >     .:53 {
	I0821 11:25:40.489388 2804799 command_runner.go:130] >         errors
	I0821 11:25:40.489419 2804799 command_runner.go:130] >         health {
	I0821 11:25:40.489444 2804799 command_runner.go:130] >            lameduck 5s
	I0821 11:25:40.489464 2804799 command_runner.go:130] >         }
	I0821 11:25:40.489484 2804799 command_runner.go:130] >         ready
	I0821 11:25:40.489518 2804799 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0821 11:25:40.489539 2804799 command_runner.go:130] >            pods insecure
	I0821 11:25:40.489559 2804799 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0821 11:25:40.489581 2804799 command_runner.go:130] >            ttl 30
	I0821 11:25:40.489618 2804799 command_runner.go:130] >         }
	I0821 11:25:40.489641 2804799 command_runner.go:130] >         prometheus :9153
	I0821 11:25:40.489662 2804799 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0821 11:25:40.489683 2804799 command_runner.go:130] >            max_concurrent 1000
	I0821 11:25:40.489702 2804799 command_runner.go:130] >         }
	I0821 11:25:40.489728 2804799 command_runner.go:130] >         cache 30
	I0821 11:25:40.489752 2804799 command_runner.go:130] >         loop
	I0821 11:25:40.489771 2804799 command_runner.go:130] >         reload
	I0821 11:25:40.489790 2804799 command_runner.go:130] >         loadbalance
	I0821 11:25:40.489809 2804799 command_runner.go:130] >     }
	I0821 11:25:40.489838 2804799 command_runner.go:130] > kind: ConfigMap
	I0821 11:25:40.489865 2804799 command_runner.go:130] > metadata:
	I0821 11:25:40.489971 2804799 command_runner.go:130] >   creationTimestamp: "2023-08-21T11:25:26Z"
	I0821 11:25:40.489995 2804799 command_runner.go:130] >   name: coredns
	I0821 11:25:40.490015 2804799 command_runner.go:130] >   namespace: kube-system
	I0821 11:25:40.490035 2804799 command_runner.go:130] >   resourceVersion: "254"
	I0821 11:25:40.490058 2804799 command_runner.go:130] >   uid: 5d8159d1-20c9-495f-afdb-85f0b65c7e39
	I0821 11:25:40.496536 2804799 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
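The sed pipeline above splices two plugins into the live CoreDNS Corefile: a `log` directive ahead of `errors`, and a `hosts` stanza ahead of `forward`, so that `host.minikube.internal` resolves to the host gateway (192.168.58.1) from inside the cluster. A sketch of the resulting stanza order, as implied by the two sed `i` expressions (surrounding plugins elided):

```
.:53 {
        log
        errors
        ...
        hosts {
           192.168.58.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
        ...
}
```

Placement matters: `hosts` must precede `forward` so the static entry is answered locally, with `fallthrough` passing all other names on to the upstream resolver.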
	I0821 11:25:40.497002 2804799 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17102-2734539/kubeconfig
	I0821 11:25:40.497300 2804799 kapi.go:59] client config for multinode-994910: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/multinode-994910/client.crt", KeyFile:"/home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/multinode-994910/client.key", CAFile:"/home/jenkins/minikube-integration/17102-2734539/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1721b00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0821 11:25:40.497621 2804799 node_ready.go:35] waiting up to 6m0s for node "multinode-994910" to be "Ready" ...
	I0821 11:25:40.497710 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:25:40.497741 2804799 round_trippers.go:469] Request Headers:
	I0821 11:25:40.497769 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:25:40.497788 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:25:40.564260 2804799 round_trippers.go:574] Response Status: 200 OK in 66 milliseconds
	I0821 11:25:40.564333 2804799 round_trippers.go:577] Response Headers:
	I0821 11:25:40.564355 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:25:40.564377 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:25:40.564410 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:25:40.564439 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:25:40.564462 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:25:40 GMT
	I0821 11:25:40.564482 2804799 round_trippers.go:580]     Audit-Id: 16c859e0-995c-475a-94db-579376397265
	I0821 11:25:40.582397 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","resourceVersion":"342","creationTimestamp":"2023-08-21T11:25:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-994910","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T11_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-21T11:25:23Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0821 11:25:40.583189 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:25:40.583230 2804799 round_trippers.go:469] Request Headers:
	I0821 11:25:40.583253 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:25:40.583277 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:25:40.627362 2804799 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0821 11:25:40.708743 2804799 round_trippers.go:574] Response Status: 200 OK in 125 milliseconds
	I0821 11:25:40.708806 2804799 round_trippers.go:577] Response Headers:
	I0821 11:25:40.708831 2804799 round_trippers.go:580]     Audit-Id: 2aa64c26-2539-4f60-9230-d7891293fede
	I0821 11:25:40.708854 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:25:40.708894 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:25:40.708921 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:25:40.708944 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:25:40.708966 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:25:40 GMT
	I0821 11:25:40.710508 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","resourceVersion":"342","creationTimestamp":"2023-08-21T11:25:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-994910","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T11_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-21T11:25:23Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0821 11:25:41.211398 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:25:41.211460 2804799 round_trippers.go:469] Request Headers:
	I0821 11:25:41.211482 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:25:41.211504 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:25:41.217802 2804799 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0821 11:25:41.217868 2804799 round_trippers.go:577] Response Headers:
	I0821 11:25:41.217907 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:25:41.217927 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:25:41.217961 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:25:41 GMT
	I0821 11:25:41.217981 2804799 round_trippers.go:580]     Audit-Id: f56e1520-d61f-45e9-af2c-a94115a8a374
	I0821 11:25:41.218002 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:25:41.218022 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:25:41.218511 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","resourceVersion":"342","creationTimestamp":"2023-08-21T11:25:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-994910","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T11_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-21T11:25:23Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0821 11:25:41.444403 2804799 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0821 11:25:41.454738 2804799 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0821 11:25:41.464484 2804799 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0821 11:25:41.475631 2804799 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0821 11:25:41.487692 2804799 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0821 11:25:41.498137 2804799 command_runner.go:130] > pod/storage-provisioner created
	I0821 11:25:41.503376 2804799 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.043523432s)
	I0821 11:25:41.503467 2804799 command_runner.go:130] > configmap/coredns replaced
	I0821 11:25:41.503589 2804799 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0821 11:25:41.505904 2804799 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0821 11:25:41.503797 2804799 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.006974972s)
	I0821 11:25:41.507701 2804799 start.go:901] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS's ConfigMap
	I0821 11:25:41.507717 2804799 addons.go:502] enable addons completed in 1.354549189s: enabled=[storage-provisioner default-storageclass]
	I0821 11:25:41.711822 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:25:41.711844 2804799 round_trippers.go:469] Request Headers:
	I0821 11:25:41.711853 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:25:41.711861 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:25:41.718016 2804799 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0821 11:25:41.718088 2804799 round_trippers.go:577] Response Headers:
	I0821 11:25:41.718123 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:25:41.718150 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:25:41.718211 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:25:41.718235 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:25:41 GMT
	I0821 11:25:41.718258 2804799 round_trippers.go:580]     Audit-Id: 22101ad6-1304-4d66-8df7-4fcc71e7c158
	I0821 11:25:41.718292 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:25:41.718526 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","resourceVersion":"342","creationTimestamp":"2023-08-21T11:25:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-994910","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T11_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-21T11:25:23Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0821 11:25:42.211471 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:25:42.211494 2804799 round_trippers.go:469] Request Headers:
	I0821 11:25:42.211505 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:25:42.211512 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:25:42.214486 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:25:42.214559 2804799 round_trippers.go:577] Response Headers:
	I0821 11:25:42.214581 2804799 round_trippers.go:580]     Audit-Id: 719212c2-2f5b-46e1-9944-0abb92ba1131
	I0821 11:25:42.214605 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:25:42.214641 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:25:42.214650 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:25:42.214668 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:25:42.214682 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:25:42 GMT
	I0821 11:25:42.214826 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","resourceVersion":"342","creationTimestamp":"2023-08-21T11:25:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-994910","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T11_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-21T11:25:23Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0821 11:25:42.711183 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:25:42.711207 2804799 round_trippers.go:469] Request Headers:
	I0821 11:25:42.711218 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:25:42.711225 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:25:42.713866 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:25:42.713987 2804799 round_trippers.go:577] Response Headers:
	I0821 11:25:42.713998 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:25:42.714005 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:25:42 GMT
	I0821 11:25:42.714012 2804799 round_trippers.go:580]     Audit-Id: 1db83816-2722-478b-b94c-70c9d76f1f54
	I0821 11:25:42.714018 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:25:42.714025 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:25:42.714032 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:25:42.714211 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","resourceVersion":"342","creationTimestamp":"2023-08-21T11:25:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-994910","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T11_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-21T11:25:23Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0821 11:25:42.714690 2804799 node_ready.go:58] node "multinode-994910" has status "Ready":"False"
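The repeated GETs above are minikube's node-readiness wait: fetch the Node object, inspect its `Ready` condition, and retry on a short interval for up to 6m0s. A minimal sketch of the condition check applied to each response body (a hypothetical reconstruction for illustration, not minikube's actual `node_ready.go` code; the `node` struct and `isReady` helper are invented names):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// node models only the fields of a v1.Node needed to evaluate readiness.
type node struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

// isReady reports whether the raw Node JSON carries a Ready condition
// whose status is "True". A missing Ready condition counts as not ready.
func isReady(raw []byte) (bool, error) {
	var n node
	if err := json.Unmarshal(raw, &n); err != nil {
		return false, err
	}
	for _, c := range n.Status.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True", nil
		}
	}
	return false, nil
}

func main() {
	// While kubelet is still starting, the API reports Ready=False,
	// matching the `has status "Ready":"False"` lines in the log.
	body := []byte(`{"status":{"conditions":[{"type":"Ready","status":"False"}]}}`)
	ok, _ := isReady(body)
	fmt.Println(ok)
}
```

The real loop layers a poll interval and the 6m0s deadline on top of this check; the log shows each iteration as a GET on `/api/v1/nodes/multinode-994910` followed by the parsed status.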
	I0821 11:25:43.211688 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:25:43.211711 2804799 round_trippers.go:469] Request Headers:
	I0821 11:25:43.211725 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:25:43.211732 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:25:43.214350 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:25:43.214374 2804799 round_trippers.go:577] Response Headers:
	I0821 11:25:43.214383 2804799 round_trippers.go:580]     Audit-Id: c35e9f5b-281c-49b0-bf6f-9fb1ec4adc2a
	I0821 11:25:43.214390 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:25:43.214405 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:25:43.214416 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:25:43.214423 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:25:43.214432 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:25:43 GMT
	I0821 11:25:43.214544 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","resourceVersion":"342","creationTimestamp":"2023-08-21T11:25:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-994910","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T11_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-21T11:25:23Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0821 11:25:43.712107 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:25:43.712128 2804799 round_trippers.go:469] Request Headers:
	I0821 11:25:43.712140 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:25:43.712148 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:25:43.714657 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:25:43.714681 2804799 round_trippers.go:577] Response Headers:
	I0821 11:25:43.714690 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:25:43.714697 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:25:43 GMT
	I0821 11:25:43.714731 2804799 round_trippers.go:580]     Audit-Id: d3308ee1-88a8-4610-b711-ad779f33274e
	I0821 11:25:43.714738 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:25:43.714749 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:25:43.714759 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:25:43.714957 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","resourceVersion":"342","creationTimestamp":"2023-08-21T11:25:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-994910","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T11_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-21T11:25:23Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0821 11:25:44.211184 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:25:44.211207 2804799 round_trippers.go:469] Request Headers:
	I0821 11:25:44.211221 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:25:44.211230 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:25:44.213750 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:25:44.213771 2804799 round_trippers.go:577] Response Headers:
	I0821 11:25:44.213780 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:25:44.213787 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:25:44.213794 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:25:44.213801 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:25:44 GMT
	I0821 11:25:44.213807 2804799 round_trippers.go:580]     Audit-Id: 1f7e2204-0228-48ef-b015-9a13b043326d
	I0821 11:25:44.213814 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:25:44.214053 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","resourceVersion":"342","creationTimestamp":"2023-08-21T11:25:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-994910","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T11_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-21T11:25:23Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0821 11:25:44.711174 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:25:44.711197 2804799 round_trippers.go:469] Request Headers:
	I0821 11:25:44.711208 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:25:44.711215 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:25:44.713244 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:25:44.713266 2804799 round_trippers.go:577] Response Headers:
	I0821 11:25:44.713275 2804799 round_trippers.go:580]     Audit-Id: 56b4bcac-1622-49d5-b4a6-baa3f11f26a5
	I0821 11:25:44.713282 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:25:44.713288 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:25:44.713295 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:25:44.713308 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:25:44.713316 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:25:44 GMT
	I0821 11:25:44.713453 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","resourceVersion":"342","creationTimestamp":"2023-08-21T11:25:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-994910","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T11_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-21T11:25:23Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0821 11:25:45.211579 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:25:45.211607 2804799 round_trippers.go:469] Request Headers:
	I0821 11:25:45.211618 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:25:45.211626 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:25:45.214328 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:25:45.214353 2804799 round_trippers.go:577] Response Headers:
	I0821 11:25:45.214363 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:25:45 GMT
	I0821 11:25:45.214370 2804799 round_trippers.go:580]     Audit-Id: 28498f97-5433-4463-8eb1-b259da636272
	I0821 11:25:45.214377 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:25:45.214384 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:25:45.214390 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:25:45.214397 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:25:45.214511 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","resourceVersion":"342","creationTimestamp":"2023-08-21T11:25:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-994910","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T11_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-21T11:25:23Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0821 11:25:45.215237 2804799 node_ready.go:58] node "multinode-994910" has status "Ready":"False"
	I0821 11:25:45.711157 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:25:45.711178 2804799 round_trippers.go:469] Request Headers:
	I0821 11:25:45.711188 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:25:45.711195 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:25:45.713672 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:25:45.713695 2804799 round_trippers.go:577] Response Headers:
	I0821 11:25:45.713703 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:25:45.713710 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:25:45.713717 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:25:45.713723 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:25:45 GMT
	I0821 11:25:45.713730 2804799 round_trippers.go:580]     Audit-Id: afb8229d-e345-414f-92d2-2f00a72d427f
	I0821 11:25:45.713737 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:25:45.713850 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","resourceVersion":"342","creationTimestamp":"2023-08-21T11:25:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-994910","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T11_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:25:23Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0821 11:25:46.211348 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:25:46.211372 2804799 round_trippers.go:469] Request Headers:
	I0821 11:25:46.211383 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:25:46.211390 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:25:46.213845 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:25:46.213869 2804799 round_trippers.go:577] Response Headers:
	I0821 11:25:46.213896 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:25:46.213903 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:25:46.213911 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:25:46.213920 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:25:46 GMT
	I0821 11:25:46.213927 2804799 round_trippers.go:580]     Audit-Id: 69df0f22-ee33-4433-9005-d8eb14120f5d
	I0821 11:25:46.213936 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:25:46.214246 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","resourceVersion":"342","creationTimestamp":"2023-08-21T11:25:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-994910","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T11_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:25:23Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0821 11:25:46.712009 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:25:46.712032 2804799 round_trippers.go:469] Request Headers:
	I0821 11:25:46.712041 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:25:46.712049 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:25:46.714492 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:25:46.714515 2804799 round_trippers.go:577] Response Headers:
	I0821 11:25:46.714523 2804799 round_trippers.go:580]     Audit-Id: cbf3dee4-1d46-4ea7-85f6-10e8d079dae7
	I0821 11:25:46.714530 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:25:46.714537 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:25:46.714548 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:25:46.714565 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:25:46.714572 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:25:46 GMT
	I0821 11:25:46.714779 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","resourceVersion":"342","creationTimestamp":"2023-08-21T11:25:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-994910","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T11_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:25:23Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0821 11:25:47.211498 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:25:47.211523 2804799 round_trippers.go:469] Request Headers:
	I0821 11:25:47.211533 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:25:47.211541 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:25:47.214081 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:25:47.214104 2804799 round_trippers.go:577] Response Headers:
	I0821 11:25:47.214112 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:25:47.214119 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:25:47.214126 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:25:47.214138 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:25:47 GMT
	I0821 11:25:47.214150 2804799 round_trippers.go:580]     Audit-Id: 2af2fb8e-a843-445c-8e88-be35a14e69d5
	I0821 11:25:47.214162 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:25:47.214512 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","resourceVersion":"342","creationTimestamp":"2023-08-21T11:25:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-994910","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T11_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:25:23Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0821 11:25:47.711436 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:25:47.711461 2804799 round_trippers.go:469] Request Headers:
	I0821 11:25:47.711472 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:25:47.711486 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:25:47.714322 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:25:47.714347 2804799 round_trippers.go:577] Response Headers:
	I0821 11:25:47.714356 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:25:47.714364 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:25:47.714370 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:25:47.714377 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:25:47.714384 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:25:47 GMT
	I0821 11:25:47.714395 2804799 round_trippers.go:580]     Audit-Id: bda57899-f7dd-4a09-a6fa-d00a89c3c632
	I0821 11:25:47.714535 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","resourceVersion":"342","creationTimestamp":"2023-08-21T11:25:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-994910","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T11_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:25:23Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0821 11:25:47.714928 2804799 node_ready.go:58] node "multinode-994910" has status "Ready":"False"
	I0821 11:25:48.211600 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:25:48.211624 2804799 round_trippers.go:469] Request Headers:
	I0821 11:25:48.211640 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:25:48.211647 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:25:48.215314 2804799 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0821 11:25:48.215337 2804799 round_trippers.go:577] Response Headers:
	I0821 11:25:48.215346 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:25:48 GMT
	I0821 11:25:48.215354 2804799 round_trippers.go:580]     Audit-Id: 41cdcbb2-05f5-4b71-aaa5-9e064eb12064
	I0821 11:25:48.215360 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:25:48.215367 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:25:48.215373 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:25:48.215381 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:25:48.215464 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","resourceVersion":"342","creationTimestamp":"2023-08-21T11:25:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-994910","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T11_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:25:23Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0821 11:25:48.712093 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:25:48.712118 2804799 round_trippers.go:469] Request Headers:
	I0821 11:25:48.712129 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:25:48.712137 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:25:48.715186 2804799 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0821 11:25:48.715211 2804799 round_trippers.go:577] Response Headers:
	I0821 11:25:48.715220 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:25:48.715227 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:25:48.715234 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:25:48.715241 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:25:48 GMT
	I0821 11:25:48.715252 2804799 round_trippers.go:580]     Audit-Id: 293fc914-bc33-412d-a83f-ccc2cd7fd93f
	I0821 11:25:48.715259 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:25:48.715372 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","resourceVersion":"342","creationTimestamp":"2023-08-21T11:25:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-994910","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T11_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:25:23Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0821 11:25:49.211466 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:25:49.211492 2804799 round_trippers.go:469] Request Headers:
	I0821 11:25:49.211503 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:25:49.211514 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:25:49.214030 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:25:49.214060 2804799 round_trippers.go:577] Response Headers:
	I0821 11:25:49.214069 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:25:49.214078 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:25:49 GMT
	I0821 11:25:49.214085 2804799 round_trippers.go:580]     Audit-Id: d67af80d-47cb-4de7-875e-ab7d21bd1489
	I0821 11:25:49.214091 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:25:49.214102 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:25:49.214117 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:25:49.214215 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","resourceVersion":"342","creationTimestamp":"2023-08-21T11:25:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-994910","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T11_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:25:23Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0821 11:25:49.711728 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:25:49.711749 2804799 round_trippers.go:469] Request Headers:
	I0821 11:25:49.711760 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:25:49.711767 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:25:49.714319 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:25:49.714411 2804799 round_trippers.go:577] Response Headers:
	I0821 11:25:49.714433 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:25:49.714470 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:25:49.714496 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:25:49.714512 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:25:49 GMT
	I0821 11:25:49.714519 2804799 round_trippers.go:580]     Audit-Id: 8a344b14-5b01-44a4-8e1c-3d7f745c3005
	I0821 11:25:49.714526 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:25:49.714674 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","resourceVersion":"342","creationTimestamp":"2023-08-21T11:25:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-994910","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T11_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:25:23Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0821 11:25:49.715111 2804799 node_ready.go:58] node "multinode-994910" has status "Ready":"False"
	I0821 11:25:50.211891 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:25:50.211914 2804799 round_trippers.go:469] Request Headers:
	I0821 11:25:50.211924 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:25:50.211932 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:25:50.214555 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:25:50.214584 2804799 round_trippers.go:577] Response Headers:
	I0821 11:25:50.214593 2804799 round_trippers.go:580]     Audit-Id: 436d9176-e960-46ad-a9cd-d221b9f6798c
	I0821 11:25:50.214600 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:25:50.214607 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:25:50.214614 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:25:50.214623 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:25:50.214630 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:25:50 GMT
	I0821 11:25:50.214725 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","resourceVersion":"342","creationTimestamp":"2023-08-21T11:25:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-994910","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T11_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:25:23Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0821 11:25:50.711890 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:25:50.711933 2804799 round_trippers.go:469] Request Headers:
	I0821 11:25:50.711943 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:25:50.711951 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:25:50.714448 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:25:50.714473 2804799 round_trippers.go:577] Response Headers:
	I0821 11:25:50.714485 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:25:50.714492 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:25:50.714499 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:25:50.714506 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:25:50.714522 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:25:50 GMT
	I0821 11:25:50.714538 2804799 round_trippers.go:580]     Audit-Id: 7e29d6ad-c0d0-4d94-88fd-4f519ae2f71d
	I0821 11:25:50.714867 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","resourceVersion":"342","creationTimestamp":"2023-08-21T11:25:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-994910","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T11_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:25:23Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0821 11:25:51.211439 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:25:51.211464 2804799 round_trippers.go:469] Request Headers:
	I0821 11:25:51.211474 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:25:51.211481 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:25:51.213917 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:25:51.213939 2804799 round_trippers.go:577] Response Headers:
	I0821 11:25:51.213948 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:25:51.213955 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:25:51.213962 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:25:51 GMT
	I0821 11:25:51.213969 2804799 round_trippers.go:580]     Audit-Id: fef46480-19b2-4304-b6b6-a40b6fa0cdbf
	I0821 11:25:51.213975 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:25:51.213984 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:25:51.214086 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","resourceVersion":"342","creationTimestamp":"2023-08-21T11:25:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-994910","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T11_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:25:23Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0821 11:25:51.711175 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:25:51.711196 2804799 round_trippers.go:469] Request Headers:
	I0821 11:25:51.711206 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:25:51.711214 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:25:51.713680 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:25:51.713705 2804799 round_trippers.go:577] Response Headers:
	I0821 11:25:51.713714 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:25:51 GMT
	I0821 11:25:51.713721 2804799 round_trippers.go:580]     Audit-Id: 0bbe87ef-e6a3-492f-9637-c5c2c58697f2
	I0821 11:25:51.713727 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:25:51.713737 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:25:51.713744 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:25:51.713753 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:25:51.713914 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","resourceVersion":"342","creationTimestamp":"2023-08-21T11:25:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-994910","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T11_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:25:23Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0821 11:25:52.212038 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:25:52.212061 2804799 round_trippers.go:469] Request Headers:
	I0821 11:25:52.212072 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:25:52.212081 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:25:52.214618 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:25:52.214644 2804799 round_trippers.go:577] Response Headers:
	I0821 11:25:52.214654 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:25:52.214661 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:25:52.214668 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:25:52.214675 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:25:52 GMT
	I0821 11:25:52.214682 2804799 round_trippers.go:580]     Audit-Id: 138ea582-4559-41ff-96b8-bda3bb75a50e
	I0821 11:25:52.214689 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:25:52.214785 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","resourceVersion":"342","creationTimestamp":"2023-08-21T11:25:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-994910","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T11_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:25:23Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0821 11:25:52.215188 2804799 node_ready.go:58] node "multinode-994910" has status "Ready":"False"
	I0821 11:25:52.711891 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:25:52.711915 2804799 round_trippers.go:469] Request Headers:
	I0821 11:25:52.711925 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:25:52.711933 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:25:52.714348 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:25:52.714367 2804799 round_trippers.go:577] Response Headers:
	I0821 11:25:52.714376 2804799 round_trippers.go:580]     Audit-Id: 32154986-f60e-4aba-a914-a171d062a5a5
	I0821 11:25:52.714383 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:25:52.714389 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:25:52.714399 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:25:52.714406 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:25:52.714413 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:25:52 GMT
	I0821 11:25:52.714501 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","resourceVersion":"342","creationTimestamp":"2023-08-21T11:25:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-994910","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T11_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-21T11:25:23Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0821 11:25:53.211696 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:25:53.211719 2804799 round_trippers.go:469] Request Headers:
	I0821 11:25:53.211730 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:25:53.211737 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:25:53.214227 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:25:53.214260 2804799 round_trippers.go:577] Response Headers:
	I0821 11:25:53.214269 2804799 round_trippers.go:580]     Audit-Id: 1d4d0254-0f01-4a33-95af-1fc0d5e3e1e0
	I0821 11:25:53.214276 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:25:53.214287 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:25:53.214304 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:25:53.214312 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:25:53.214323 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:25:53 GMT
	I0821 11:25:53.214596 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","resourceVersion":"342","creationTimestamp":"2023-08-21T11:25:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-994910","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T11_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-21T11:25:23Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0821 11:25:53.711190 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:25:53.711216 2804799 round_trippers.go:469] Request Headers:
	I0821 11:25:53.711227 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:25:53.711235 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:25:53.713713 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:25:53.713736 2804799 round_trippers.go:577] Response Headers:
	I0821 11:25:53.713745 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:25:53.713752 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:25:53 GMT
	I0821 11:25:53.713759 2804799 round_trippers.go:580]     Audit-Id: fb9d32e0-2581-4931-89d2-4c85ebc16d9e
	I0821 11:25:53.713768 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:25:53.713775 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:25:53.713784 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:25:53.713942 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","resourceVersion":"342","creationTimestamp":"2023-08-21T11:25:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-994910","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T11_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-21T11:25:23Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0821 11:25:54.212101 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:25:54.212124 2804799 round_trippers.go:469] Request Headers:
	I0821 11:25:54.212135 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:25:54.212142 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:25:54.215189 2804799 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0821 11:25:54.215214 2804799 round_trippers.go:577] Response Headers:
	I0821 11:25:54.215224 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:25:54.215231 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:25:54.215238 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:25:54.215245 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:25:54 GMT
	I0821 11:25:54.215252 2804799 round_trippers.go:580]     Audit-Id: 62dc1585-e69f-4020-88b0-5b9e13e1f9c9
	I0821 11:25:54.215259 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:25:54.215351 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","resourceVersion":"342","creationTimestamp":"2023-08-21T11:25:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-994910","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T11_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-21T11:25:23Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0821 11:25:54.215752 2804799 node_ready.go:58] node "multinode-994910" has status "Ready":"False"
	I0821 11:25:54.711165 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:25:54.711189 2804799 round_trippers.go:469] Request Headers:
	I0821 11:25:54.711199 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:25:54.711206 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:25:54.713359 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:25:54.713387 2804799 round_trippers.go:577] Response Headers:
	I0821 11:25:54.713398 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:25:54 GMT
	I0821 11:25:54.713405 2804799 round_trippers.go:580]     Audit-Id: d8453a7b-6923-4c12-b0ab-82d2dae8d37c
	I0821 11:25:54.713412 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:25:54.713422 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:25:54.713436 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:25:54.713443 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:25:54.713747 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","resourceVersion":"342","creationTimestamp":"2023-08-21T11:25:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-994910","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T11_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-21T11:25:23Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0821 11:25:55.211216 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:25:55.211240 2804799 round_trippers.go:469] Request Headers:
	I0821 11:25:55.211251 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:25:55.211258 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:25:55.213983 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:25:55.214008 2804799 round_trippers.go:577] Response Headers:
	I0821 11:25:55.214018 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:25:55 GMT
	I0821 11:25:55.214025 2804799 round_trippers.go:580]     Audit-Id: dcfc634d-bf42-423b-bd8a-bef6efc89e3f
	I0821 11:25:55.214031 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:25:55.214038 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:25:55.214046 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:25:55.214056 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:25:55.214172 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","resourceVersion":"342","creationTimestamp":"2023-08-21T11:25:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-994910","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T11_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-21T11:25:23Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0821 11:25:55.711780 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:25:55.711804 2804799 round_trippers.go:469] Request Headers:
	I0821 11:25:55.711815 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:25:55.711822 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:25:55.714301 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:25:55.714323 2804799 round_trippers.go:577] Response Headers:
	I0821 11:25:55.714331 2804799 round_trippers.go:580]     Audit-Id: 6e3c40e2-c1a1-46ac-9326-c553cecbe543
	I0821 11:25:55.714338 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:25:55.714344 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:25:55.714351 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:25:55.714358 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:25:55.714365 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:25:55 GMT
	I0821 11:25:55.714493 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","resourceVersion":"342","creationTimestamp":"2023-08-21T11:25:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-994910","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T11_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-21T11:25:23Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0821 11:25:56.211536 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:25:56.211561 2804799 round_trippers.go:469] Request Headers:
	I0821 11:25:56.211571 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:25:56.211579 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:25:56.214077 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:25:56.214104 2804799 round_trippers.go:577] Response Headers:
	I0821 11:25:56.214112 2804799 round_trippers.go:580]     Audit-Id: d0549e14-704a-4483-8d85-4bbac3f42328
	I0821 11:25:56.214119 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:25:56.214128 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:25:56.214134 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:25:56.214141 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:25:56.214148 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:25:56 GMT
	I0821 11:25:56.214237 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","resourceVersion":"342","creationTimestamp":"2023-08-21T11:25:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-994910","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T11_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-21T11:25:23Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0821 11:25:56.711840 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:25:56.711862 2804799 round_trippers.go:469] Request Headers:
	I0821 11:25:56.711872 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:25:56.711879 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:25:56.714433 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:25:56.714456 2804799 round_trippers.go:577] Response Headers:
	I0821 11:25:56.714465 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:25:56.714472 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:25:56.714479 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:25:56 GMT
	I0821 11:25:56.714486 2804799 round_trippers.go:580]     Audit-Id: 18672ff2-f62b-45b5-a89d-fadfb35f46ec
	I0821 11:25:56.714492 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:25:56.714499 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:25:56.714618 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","resourceVersion":"342","creationTimestamp":"2023-08-21T11:25:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-994910","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T11_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-21T11:25:23Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0821 11:25:56.715076 2804799 node_ready.go:58] node "multinode-994910" has status "Ready":"False"
	I0821 11:25:57.211815 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:25:57.211838 2804799 round_trippers.go:469] Request Headers:
	I0821 11:25:57.211849 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:25:57.211856 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:25:57.214280 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:25:57.214304 2804799 round_trippers.go:577] Response Headers:
	I0821 11:25:57.214312 2804799 round_trippers.go:580]     Audit-Id: 39a4924f-cade-431e-a946-9839b3fe6934
	I0821 11:25:57.214320 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:25:57.214327 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:25:57.214334 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:25:57.214343 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:25:57.214353 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:25:57 GMT
	I0821 11:25:57.214603 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","resourceVersion":"342","creationTimestamp":"2023-08-21T11:25:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-994910","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T11_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-21T11:25:23Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0821 11:25:57.711186 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:25:57.711207 2804799 round_trippers.go:469] Request Headers:
	I0821 11:25:57.711217 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:25:57.711225 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:25:57.713565 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:25:57.713590 2804799 round_trippers.go:577] Response Headers:
	I0821 11:25:57.713604 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:25:57.713611 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:25:57.713617 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:25:57.713624 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:25:57 GMT
	I0821 11:25:57.713632 2804799 round_trippers.go:580]     Audit-Id: 401e1d5b-a5b7-4ce6-b6d1-a3208b225b65
	I0821 11:25:57.713639 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:25:57.713799 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","resourceVersion":"342","creationTimestamp":"2023-08-21T11:25:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-994910","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T11_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-21T11:25:23Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0821 11:25:58.211383 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:25:58.211407 2804799 round_trippers.go:469] Request Headers:
	I0821 11:25:58.211417 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:25:58.211425 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:25:58.213865 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:25:58.213907 2804799 round_trippers.go:577] Response Headers:
	I0821 11:25:58.213916 2804799 round_trippers.go:580]     Audit-Id: eebf60f0-0cb8-4ab2-9b65-915455642779
	I0821 11:25:58.213923 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:25:58.213934 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:25:58.213941 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:25:58.213954 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:25:58.213962 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:25:58 GMT
	I0821 11:25:58.214208 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","resourceVersion":"342","creationTimestamp":"2023-08-21T11:25:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-994910","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T11_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-21T11:25:23Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0821 11:25:58.711260 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:25:58.711281 2804799 round_trippers.go:469] Request Headers:
	I0821 11:25:58.711291 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:25:58.711299 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:25:58.713730 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:25:58.713754 2804799 round_trippers.go:577] Response Headers:
	I0821 11:25:58.713766 2804799 round_trippers.go:580]     Audit-Id: 6ed11b04-c5f6-4bce-8b09-6288332c34ae
	I0821 11:25:58.713777 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:25:58.713783 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:25:58.713790 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:25:58.713796 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:25:58.713807 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:25:58 GMT
	I0821 11:25:58.713978 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","resourceVersion":"342","creationTimestamp":"2023-08-21T11:25:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-994910","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T11_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-21T11:25:23Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0821 11:25:59.211097 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:25:59.211122 2804799 round_trippers.go:469] Request Headers:
	I0821 11:25:59.211132 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:25:59.211140 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:25:59.213511 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:25:59.213535 2804799 round_trippers.go:577] Response Headers:
	I0821 11:25:59.213548 2804799 round_trippers.go:580]     Audit-Id: d0cb687b-109d-4ef1-8279-4f99c857a19b
	I0821 11:25:59.213556 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:25:59.213563 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:25:59.213569 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:25:59.213579 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:25:59.213652 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:25:59 GMT
	I0821 11:25:59.213939 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","resourceVersion":"342","creationTimestamp":"2023-08-21T11:25:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-994910","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T11_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:25:23Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0821 11:25:59.214341 2804799 node_ready.go:58] node "multinode-994910" has status "Ready":"False"
	I0821 11:25:59.711185 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:25:59.711210 2804799 round_trippers.go:469] Request Headers:
	I0821 11:25:59.711221 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:25:59.711228 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:25:59.713644 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:25:59.713664 2804799 round_trippers.go:577] Response Headers:
	I0821 11:25:59.713672 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:25:59.713679 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:25:59.713701 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:25:59 GMT
	I0821 11:25:59.713721 2804799 round_trippers.go:580]     Audit-Id: 0425c729-7511-4ed7-a9a5-22b669a3b092
	I0821 11:25:59.713728 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:25:59.713734 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:25:59.713870 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","resourceVersion":"342","creationTimestamp":"2023-08-21T11:25:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-994910","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T11_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:25:23Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0821 11:26:00.211608 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:26:00.211635 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:00.211647 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:00.211655 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:00.214407 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:00.214435 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:00.214445 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:00.214452 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:00.214459 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:00.214466 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:00 GMT
	I0821 11:26:00.214472 2804799 round_trippers.go:580]     Audit-Id: 6f50d1c4-a5fb-4617-84cf-540606abac5f
	I0821 11:26:00.214480 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:00.214589 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","resourceVersion":"342","creationTimestamp":"2023-08-21T11:25:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-994910","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T11_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:25:23Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0821 11:26:00.712117 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:26:00.712141 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:00.712152 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:00.712159 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:00.714589 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:00.714615 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:00.714624 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:00 GMT
	I0821 11:26:00.714631 2804799 round_trippers.go:580]     Audit-Id: 1cf04721-deb4-4bb7-ba41-0c890a739e72
	I0821 11:26:00.714638 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:00.714647 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:00.714654 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:00.714668 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:00.714908 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","resourceVersion":"342","creationTimestamp":"2023-08-21T11:25:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-994910","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T11_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:25:23Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0821 11:26:01.211514 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:26:01.211539 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:01.211551 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:01.211564 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:01.214306 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:01.214329 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:01.214338 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:01.214346 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:01.214353 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:01 GMT
	I0821 11:26:01.214359 2804799 round_trippers.go:580]     Audit-Id: 65274125-97c7-43eb-ad66-3f6305d9ee73
	I0821 11:26:01.214367 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:01.214387 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:01.214802 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","resourceVersion":"342","creationTimestamp":"2023-08-21T11:25:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-994910","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T11_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:25:23Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0821 11:26:01.215201 2804799 node_ready.go:58] node "multinode-994910" has status "Ready":"False"
	I0821 11:26:01.711971 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:26:01.711993 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:01.712003 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:01.712011 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:01.714441 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:01.714463 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:01.714472 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:01.714479 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:01.714486 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:01.714493 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:01 GMT
	I0821 11:26:01.714500 2804799 round_trippers.go:580]     Audit-Id: 88a06f80-a617-4842-b78d-6b5c28e2e7e4
	I0821 11:26:01.714506 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:01.714659 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","resourceVersion":"342","creationTimestamp":"2023-08-21T11:25:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-994910","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T11_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:25:23Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0821 11:26:02.211241 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:26:02.211266 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:02.211276 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:02.211284 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:02.213804 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:02.213834 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:02.213844 2804799 round_trippers.go:580]     Audit-Id: c43d0836-96c3-415e-9183-c0c13e639b9a
	I0821 11:26:02.213850 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:02.213857 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:02.213864 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:02.213891 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:02.213902 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:02 GMT
	I0821 11:26:02.214012 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","resourceVersion":"342","creationTimestamp":"2023-08-21T11:25:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-994910","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T11_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:25:23Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0821 11:26:02.711174 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:26:02.711196 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:02.711206 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:02.711214 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:02.713635 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:02.713658 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:02.713666 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:02.713673 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:02.713680 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:02.713687 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:02.713694 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:02 GMT
	I0821 11:26:02.713700 2804799 round_trippers.go:580]     Audit-Id: 5b7a5292-178b-4e37-9fde-7c97b4a48000
	I0821 11:26:02.713811 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","resourceVersion":"342","creationTimestamp":"2023-08-21T11:25:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-994910","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T11_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:25:23Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0821 11:26:03.211194 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:26:03.211218 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:03.211229 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:03.211236 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:03.213766 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:03.213794 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:03.213803 2804799 round_trippers.go:580]     Audit-Id: b4d19696-a82d-4202-bd0f-8d3678c183cf
	I0821 11:26:03.213812 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:03.213819 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:03.213825 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:03.213832 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:03.213838 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:03 GMT
	I0821 11:26:03.213951 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","resourceVersion":"342","creationTimestamp":"2023-08-21T11:25:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-994910","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T11_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:25:23Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0821 11:26:03.711174 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:26:03.711195 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:03.711206 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:03.711215 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:03.713783 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:03.713806 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:03.713815 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:03.713822 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:03.713828 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:03.713835 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:03 GMT
	I0821 11:26:03.713842 2804799 round_trippers.go:580]     Audit-Id: 3ee67421-a1af-49c6-9150-ca04fab34322
	I0821 11:26:03.713849 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:03.713988 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","resourceVersion":"342","creationTimestamp":"2023-08-21T11:25:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-994910","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T11_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:25:23Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0821 11:26:03.714398 2804799 node_ready.go:58] node "multinode-994910" has status "Ready":"False"
	I0821 11:26:04.211077 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:26:04.211101 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:04.211111 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:04.211119 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:04.213741 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:04.213766 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:04.213776 2804799 round_trippers.go:580]     Audit-Id: 319e5050-df37-41fe-9841-f02742939a50
	I0821 11:26:04.213783 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:04.213790 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:04.213797 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:04.213807 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:04.213824 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:04 GMT
	I0821 11:26:04.214142 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","resourceVersion":"342","creationTimestamp":"2023-08-21T11:25:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-994910","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T11_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:25:23Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0821 11:26:04.711746 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:26:04.711769 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:04.711780 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:04.711787 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:04.714036 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:04.714056 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:04.714064 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:04.714071 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:04.714078 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:04.714085 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:04 GMT
	I0821 11:26:04.714092 2804799 round_trippers.go:580]     Audit-Id: d66cb6ad-7bf3-45de-af81-53d842d5d1de
	I0821 11:26:04.714098 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:04.714224 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","resourceVersion":"342","creationTimestamp":"2023-08-21T11:25:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-994910","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T11_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:25:23Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0821 11:26:05.211478 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:26:05.211504 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:05.211514 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:05.211522 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:05.213990 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:05.214011 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:05.214020 2804799 round_trippers.go:580]     Audit-Id: 0febb91f-b4f0-4a29-a445-549408598f0e
	I0821 11:26:05.214026 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:05.214033 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:05.214039 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:05.214046 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:05.214053 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:05 GMT
	I0821 11:26:05.214134 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","resourceVersion":"342","creationTimestamp":"2023-08-21T11:25:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-994910","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T11_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:25:23Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0821 11:26:05.711406 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:26:05.711429 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:05.711439 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:05.711448 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:05.713938 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:05.713963 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:05.713972 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:05.713979 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:05.713986 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:05.713993 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:05.713999 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:05 GMT
	I0821 11:26:05.714006 2804799 round_trippers.go:580]     Audit-Id: 6cb107a6-53e1-45a4-8931-1d9211ca9d4f
	I0821 11:26:05.714302 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","resourceVersion":"342","creationTimestamp":"2023-08-21T11:25:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-994910","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T11_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:25:23Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0821 11:26:05.714713 2804799 node_ready.go:58] node "multinode-994910" has status "Ready":"False"
	I0821 11:26:06.212002 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:26:06.212026 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:06.212036 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:06.212044 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:06.214828 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:06.214850 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:06.214859 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:06.214865 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:06.214872 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:06.214879 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:06 GMT
	I0821 11:26:06.214885 2804799 round_trippers.go:580]     Audit-Id: f11b2511-a390-4e24-a3ba-ef32947747be
	I0821 11:26:06.214892 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:06.215009 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","resourceVersion":"342","creationTimestamp":"2023-08-21T11:25:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-994910","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T11_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:25:23Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0821 11:26:06.711578 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:26:06.711601 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:06.711611 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:06.711618 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:06.714084 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:06.714103 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:06.714112 2804799 round_trippers.go:580]     Audit-Id: 19f3425b-bc03-45ec-ba9f-670ac0b598a4
	I0821 11:26:06.714118 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:06.714125 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:06.714132 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:06.714138 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:06.714145 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:06 GMT
	I0821 11:26:06.714324 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","resourceVersion":"342","creationTimestamp":"2023-08-21T11:25:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-994910","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T11_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-21T11:25:23Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0821 11:26:07.211362 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:26:07.211386 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:07.211397 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:07.211405 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:07.213961 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:07.213982 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:07.213991 2804799 round_trippers.go:580]     Audit-Id: 4b99947a-0676-467d-87ec-f0a6c629e984
	I0821 11:26:07.214000 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:07.214007 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:07.214014 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:07.214022 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:07.214036 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:07 GMT
	I0821 11:26:07.214137 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","resourceVersion":"342","creationTimestamp":"2023-08-21T11:25:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-994910","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T11_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-21T11:25:23Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0821 11:26:07.711217 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:26:07.711242 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:07.711253 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:07.711261 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:07.713761 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:07.713794 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:07.713802 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:07.713809 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:07.713816 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:07 GMT
	I0821 11:26:07.713825 2804799 round_trippers.go:580]     Audit-Id: 72ec17b7-4cae-4179-a49a-283c2b49824d
	I0821 11:26:07.713832 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:07.713838 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:07.713993 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","resourceVersion":"342","creationTimestamp":"2023-08-21T11:25:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-994910","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T11_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-21T11:25:23Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0821 11:26:08.211623 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:26:08.211644 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:08.211654 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:08.211662 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:08.214220 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:08.214252 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:08.214261 2804799 round_trippers.go:580]     Audit-Id: 0e3090cf-37ae-4b72-a1f2-0d0466925469
	I0821 11:26:08.214268 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:08.214274 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:08.214281 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:08.214287 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:08.214297 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:08 GMT
	I0821 11:26:08.214407 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","resourceVersion":"342","creationTimestamp":"2023-08-21T11:25:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-994910","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T11_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-21T11:25:23Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0821 11:26:08.214812 2804799 node_ready.go:58] node "multinode-994910" has status "Ready":"False"
	I0821 11:26:08.711681 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:26:08.711702 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:08.711713 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:08.711720 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:08.714210 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:08.714242 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:08.714254 2804799 round_trippers.go:580]     Audit-Id: 0d69b84f-587a-48ac-8b28-d49fc0161d9c
	I0821 11:26:08.714262 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:08.714269 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:08.714280 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:08.714293 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:08.714301 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:08 GMT
	I0821 11:26:08.714542 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","resourceVersion":"342","creationTimestamp":"2023-08-21T11:25:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-994910","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T11_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-21T11:25:23Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0821 11:26:09.211147 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:26:09.211174 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:09.211185 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:09.211192 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:09.213747 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:09.213766 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:09.213774 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:09.213781 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:09 GMT
	I0821 11:26:09.213787 2804799 round_trippers.go:580]     Audit-Id: 233c9d0a-987d-47ee-85c4-5a8535e43b44
	I0821 11:26:09.213794 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:09.213800 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:09.213806 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:09.213943 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","resourceVersion":"342","creationTimestamp":"2023-08-21T11:25:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-994910","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T11_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-21T11:25:23Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0821 11:26:09.712047 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:26:09.712069 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:09.712080 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:09.712088 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:09.714662 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:09.714684 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:09.714693 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:09.714699 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:09.714706 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:09.714712 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:09.714720 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:09 GMT
	I0821 11:26:09.714726 2804799 round_trippers.go:580]     Audit-Id: 4c4e5e9a-ab09-4a80-8177-95c1dfaa2eb8
	I0821 11:26:09.714848 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","resourceVersion":"342","creationTimestamp":"2023-08-21T11:25:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-994910","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T11_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-21T11:25:23Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0821 11:26:10.211513 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:26:10.211536 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:10.211546 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:10.211554 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:10.214060 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:10.214083 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:10.214092 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:10.214105 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:10.214112 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:10.214122 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:10 GMT
	I0821 11:26:10.214129 2804799 round_trippers.go:580]     Audit-Id: 8dc8697a-a33c-4002-831f-30f8a3f313de
	I0821 11:26:10.214138 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:10.214656 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","resourceVersion":"342","creationTimestamp":"2023-08-21T11:25:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-994910","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T11_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-21T11:25:23Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0821 11:26:10.215073 2804799 node_ready.go:58] node "multinode-994910" has status "Ready":"False"
	I0821 11:26:10.711256 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:26:10.711279 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:10.711289 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:10.711296 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:10.713793 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:10.713813 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:10.713821 2804799 round_trippers.go:580]     Audit-Id: e3c07dd0-c1e2-4284-96c7-4670408234b6
	I0821 11:26:10.713828 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:10.713835 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:10.713841 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:10.713848 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:10.713855 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:10 GMT
	I0821 11:26:10.714032 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","resourceVersion":"342","creationTimestamp":"2023-08-21T11:25:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-994910","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T11_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-21T11:25:23Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0821 11:26:11.211287 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:26:11.211311 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:11.211321 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:11.211329 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:11.213755 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:11.213777 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:11.213786 2804799 round_trippers.go:580]     Audit-Id: a98a2143-c530-47ff-a7f3-85c5ccad83ce
	I0821 11:26:11.213792 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:11.213799 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:11.213805 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:11.213812 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:11.213819 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:11 GMT
	I0821 11:26:11.213929 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","resourceVersion":"342","creationTimestamp":"2023-08-21T11:25:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-994910","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T11_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-21T11:25:23Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0821 11:26:11.712095 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:26:11.712121 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:11.712132 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:11.712139 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:11.714576 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:11.714598 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:11.714609 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:11.714616 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:11.714622 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:11.714628 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:11.714635 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:11 GMT
	I0821 11:26:11.714641 2804799 round_trippers.go:580]     Audit-Id: 53d8351e-c436-4595-a632-bd61695fb3d8
	I0821 11:26:11.714760 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","resourceVersion":"435","creationTimestamp":"2023-08-21T11:25:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-994910","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T11_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-21T11:25:23Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0821 11:26:11.715134 2804799 node_ready.go:49] node "multinode-994910" has status "Ready":"True"
	I0821 11:26:11.715143 2804799 node_ready.go:38] duration metric: took 31.21748599s waiting for node "multinode-994910" to be "Ready" ...
	I0821 11:26:11.715151 2804799 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0821 11:26:11.715237 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0821 11:26:11.715242 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:11.715250 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:11.715256 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:11.718431 2804799 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0821 11:26:11.718448 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:11.718456 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:11 GMT
	I0821 11:26:11.718463 2804799 round_trippers.go:580]     Audit-Id: 55a04344-500e-4df9-9861-5d85b0fa47fc
	I0821 11:26:11.718469 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:11.718476 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:11.718482 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:11.718489 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:11.718824 2804799 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"441"},"items":[{"metadata":{"name":"coredns-5d78c9869d-zj5f8","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"b6aeac2c-fd47-4855-8a60-675aa03078a6","resourceVersion":"441","creationTimestamp":"2023-08-21T11:25:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"6f5458c4-0287-4acb-a4c3-19fd45c7091a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:25:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6f5458c4-0287-4acb-a4c3-19fd45c7091a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55535 chars]
	I0821 11:26:11.722854 2804799 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-zj5f8" in "kube-system" namespace to be "Ready" ...
	I0821 11:26:11.722932 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-zj5f8
	I0821 11:26:11.722943 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:11.722952 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:11.722960 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:11.725257 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:11.725273 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:11.725281 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:11.725288 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:11.725295 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:11.725304 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:11 GMT
	I0821 11:26:11.725311 2804799 round_trippers.go:580]     Audit-Id: c2485ac4-c889-404e-a10c-53b49415bd42
	I0821 11:26:11.725317 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:11.725398 2804799 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-zj5f8","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"b6aeac2c-fd47-4855-8a60-675aa03078a6","resourceVersion":"441","creationTimestamp":"2023-08-21T11:25:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"6f5458c4-0287-4acb-a4c3-19fd45c7091a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:25:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6f5458c4-0287-4acb-a4c3-19fd45c7091a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0821 11:26:11.725852 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:26:11.725859 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:11.725867 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:11.725924 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:11.728008 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:11.728025 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:11.728033 2804799 round_trippers.go:580]     Audit-Id: e67b3fdd-0c50-4b92-ad12-e1ce662aca37
	I0821 11:26:11.728040 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:11.728046 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:11.728053 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:11.728060 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:11.728070 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:11 GMT
	I0821 11:26:11.728196 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","resourceVersion":"435","creationTimestamp":"2023-08-21T11:25:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-994910","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T11_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-21T11:25:23Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0821 11:26:11.728596 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-zj5f8
	I0821 11:26:11.728603 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:11.728611 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:11.728618 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:11.730665 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:11.730684 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:11.730693 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:11.730699 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:11.730706 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:11 GMT
	I0821 11:26:11.730712 2804799 round_trippers.go:580]     Audit-Id: d460f743-0ac1-433c-b4f8-c804a41f2f2b
	I0821 11:26:11.730719 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:11.730727 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:11.730819 2804799 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-zj5f8","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"b6aeac2c-fd47-4855-8a60-675aa03078a6","resourceVersion":"441","creationTimestamp":"2023-08-21T11:25:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"6f5458c4-0287-4acb-a4c3-19fd45c7091a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:25:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6f5458c4-0287-4acb-a4c3-19fd45c7091a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0821 11:26:11.731261 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:26:11.731276 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:11.731285 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:11.731293 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:11.733453 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:11.733486 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:11.733495 2804799 round_trippers.go:580]     Audit-Id: 21f608e5-aa1a-475b-aa41-e767a905e484
	I0821 11:26:11.733502 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:11.733508 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:11.733519 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:11.733534 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:11.733541 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:11 GMT
	I0821 11:26:11.733663 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","resourceVersion":"435","creationTimestamp":"2023-08-21T11:25:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-994910","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T11_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-21T11:25:23Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0821 11:26:12.234822 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-zj5f8
	I0821 11:26:12.234847 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:12.234857 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:12.234865 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:12.237578 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:12.237603 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:12.237612 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:12.237619 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:12.237626 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:12.237632 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:12 GMT
	I0821 11:26:12.237639 2804799 round_trippers.go:580]     Audit-Id: 205b9864-2e58-4f8a-9990-e9e0497e56d6
	I0821 11:26:12.237645 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:12.237761 2804799 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-zj5f8","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"b6aeac2c-fd47-4855-8a60-675aa03078a6","resourceVersion":"441","creationTimestamp":"2023-08-21T11:25:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"6f5458c4-0287-4acb-a4c3-19fd45c7091a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:25:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6f5458c4-0287-4acb-a4c3-19fd45c7091a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0821 11:26:12.238304 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:26:12.238313 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:12.238321 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:12.238329 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:12.240586 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:12.240602 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:12.240611 2804799 round_trippers.go:580]     Audit-Id: c74e06b0-5f65-4c6e-a15d-07f74fb77207
	I0821 11:26:12.240617 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:12.240623 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:12.240630 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:12.240674 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:12.240685 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:12 GMT
	I0821 11:26:12.240856 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","resourceVersion":"435","creationTimestamp":"2023-08-21T11:25:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-994910","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T11_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-21T11:25:23Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0821 11:26:12.734958 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-zj5f8
	I0821 11:26:12.734986 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:12.734995 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:12.735003 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:12.737645 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:12.737703 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:12.737724 2804799 round_trippers.go:580]     Audit-Id: c74119b1-f5d7-4db8-8add-cecf4ab79a24
	I0821 11:26:12.737745 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:12.737783 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:12.737807 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:12.737828 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:12.737842 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:12 GMT
	I0821 11:26:12.737993 2804799 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-zj5f8","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"b6aeac2c-fd47-4855-8a60-675aa03078a6","resourceVersion":"441","creationTimestamp":"2023-08-21T11:25:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"6f5458c4-0287-4acb-a4c3-19fd45c7091a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:25:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6f5458c4-0287-4acb-a4c3-19fd45c7091a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0821 11:26:12.738523 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:26:12.738536 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:12.738544 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:12.738553 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:12.740920 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:12.740975 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:12.740996 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:12.741016 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:12.741053 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:12.741080 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:12 GMT
	I0821 11:26:12.741101 2804799 round_trippers.go:580]     Audit-Id: ee79052c-10bc-4419-ac14-5b058c05a47e
	I0821 11:26:12.741126 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:12.741267 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","resourceVersion":"435","creationTimestamp":"2023-08-21T11:25:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-994910","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T11_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-21T11:25:23Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0821 11:26:13.234337 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-zj5f8
	I0821 11:26:13.234359 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:13.234369 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:13.234376 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:13.236850 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:13.236916 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:13.236938 2804799 round_trippers.go:580]     Audit-Id: 150a2374-3037-49f1-9d4f-c99debaba99b
	I0821 11:26:13.236961 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:13.237044 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:13.237060 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:13.237068 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:13.237077 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:13 GMT
	I0821 11:26:13.237198 2804799 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-zj5f8","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"b6aeac2c-fd47-4855-8a60-675aa03078a6","resourceVersion":"454","creationTimestamp":"2023-08-21T11:25:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"6f5458c4-0287-4acb-a4c3-19fd45c7091a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:25:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6f5458c4-0287-4acb-a4c3-19fd45c7091a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I0821 11:26:13.237769 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:26:13.237785 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:13.237793 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:13.237801 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:13.240159 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:13.240181 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:13.240189 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:13.240196 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:13 GMT
	I0821 11:26:13.240202 2804799 round_trippers.go:580]     Audit-Id: cc92e2e0-645a-46a5-a173-6d2d299709c1
	I0821 11:26:13.240209 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:13.240215 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:13.240225 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:13.240382 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","resourceVersion":"435","creationTimestamp":"2023-08-21T11:25:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-994910","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T11_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-21T11:25:23Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0821 11:26:13.240778 2804799 pod_ready.go:92] pod "coredns-5d78c9869d-zj5f8" in "kube-system" namespace has status "Ready":"True"
	I0821 11:26:13.240795 2804799 pod_ready.go:81] duration metric: took 1.51791799s waiting for pod "coredns-5d78c9869d-zj5f8" in "kube-system" namespace to be "Ready" ...
	I0821 11:26:13.240806 2804799 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-994910" in "kube-system" namespace to be "Ready" ...
	I0821 11:26:13.240868 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-994910
	I0821 11:26:13.240878 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:13.240886 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:13.240893 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:13.243277 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:13.243298 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:13.243311 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:13.243318 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:13.243330 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:13.243341 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:13 GMT
	I0821 11:26:13.243357 2804799 round_trippers.go:580]     Audit-Id: 1e50a32d-4c51-4660-96ce-b7cbe294baa8
	I0821 11:26:13.243364 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:13.243506 2804799 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-994910","namespace":"kube-system","uid":"24d87a69-0a05-42d6-ba48-1d33fb7412be","resourceVersion":"425","creationTimestamp":"2023-08-21T11:25:26Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"9b1925099c14da60e336ef1734e7725e","kubernetes.io/config.mirror":"9b1925099c14da60e336ef1734e7725e","kubernetes.io/config.seen":"2023-08-21T11:25:26.585338579Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:25:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I0821 11:26:13.243980 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:26:13.243996 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:13.244004 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:13.244011 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:13.246118 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:13.246135 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:13.246143 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:13.246150 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:13.246157 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:13 GMT
	I0821 11:26:13.246163 2804799 round_trippers.go:580]     Audit-Id: fcdd2ae9-a915-44bd-acc4-34f1422451df
	I0821 11:26:13.246170 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:13.246176 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:13.246372 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","resourceVersion":"435","creationTimestamp":"2023-08-21T11:25:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-994910","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T11_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-21T11:25:23Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0821 11:26:13.246768 2804799 pod_ready.go:92] pod "etcd-multinode-994910" in "kube-system" namespace has status "Ready":"True"
	I0821 11:26:13.246785 2804799 pod_ready.go:81] duration metric: took 5.967225ms waiting for pod "etcd-multinode-994910" in "kube-system" namespace to be "Ready" ...
	I0821 11:26:13.246799 2804799 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-994910" in "kube-system" namespace to be "Ready" ...
	I0821 11:26:13.246855 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-994910
	I0821 11:26:13.246864 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:13.246872 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:13.246879 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:13.249043 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:13.249059 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:13.249067 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:13 GMT
	I0821 11:26:13.249073 2804799 round_trippers.go:580]     Audit-Id: dd151643-740e-4eb1-b6be-403670a9d947
	I0821 11:26:13.249080 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:13.249086 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:13.249093 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:13.249099 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:13.249262 2804799 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-994910","namespace":"kube-system","uid":"41fedc8e-465b-4561-977c-624f45660c46","resourceVersion":"424","creationTimestamp":"2023-08-21T11:25:26Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"cde3bdd814a40f419d40c9c13bc7666b","kubernetes.io/config.mirror":"cde3bdd814a40f419d40c9c13bc7666b","kubernetes.io/config.seen":"2023-08-21T11:25:26.585339867Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:25:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I0821 11:26:13.249851 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:26:13.249868 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:13.249895 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:13.249910 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:13.252096 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:13.252115 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:13.252123 2804799 round_trippers.go:580]     Audit-Id: 7f4927e9-879a-4480-b5ca-99f443da4544
	I0821 11:26:13.252130 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:13.252137 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:13.252143 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:13.252150 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:13.252156 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:13 GMT
	I0821 11:26:13.252261 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","resourceVersion":"435","creationTimestamp":"2023-08-21T11:25:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-994910","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T11_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-21T11:25:23Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0821 11:26:13.252633 2804799 pod_ready.go:92] pod "kube-apiserver-multinode-994910" in "kube-system" namespace has status "Ready":"True"
	I0821 11:26:13.252643 2804799 pod_ready.go:81] duration metric: took 5.834551ms waiting for pod "kube-apiserver-multinode-994910" in "kube-system" namespace to be "Ready" ...
	I0821 11:26:13.252653 2804799 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-994910" in "kube-system" namespace to be "Ready" ...
	I0821 11:26:13.252745 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-994910
	I0821 11:26:13.252750 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:13.252757 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:13.252764 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:13.254951 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:13.254968 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:13.254977 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:13.254983 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:13.254990 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:13 GMT
	I0821 11:26:13.254997 2804799 round_trippers.go:580]     Audit-Id: 258b0e53-8b62-4f7a-b3aa-fc17c480d54d
	I0821 11:26:13.255003 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:13.255009 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:13.255155 2804799 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-994910","namespace":"kube-system","uid":"884f2285-c54f-4972-bdab-2e0f7a2bf63d","resourceVersion":"422","creationTimestamp":"2023-08-21T11:25:26Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4f6eec765e4666706050a72ce2877339","kubernetes.io/config.mirror":"4f6eec765e4666706050a72ce2877339","kubernetes.io/config.seen":"2023-08-21T11:25:26.585331506Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:25:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I0821 11:26:13.312909 2804799 request.go:629] Waited for 57.216078ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:26:13.312966 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:26:13.312971 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:13.312980 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:13.312986 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:13.315501 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:13.315562 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:13.315584 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:13.315607 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:13.315644 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:13.315670 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:13.315693 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:13 GMT
	I0821 11:26:13.315732 2804799 round_trippers.go:580]     Audit-Id: b6d28898-6f17-4ff1-9954-7e5a92e4d562
	I0821 11:26:13.315879 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","resourceVersion":"435","creationTimestamp":"2023-08-21T11:25:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-994910","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T11_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-21T11:25:23Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0821 11:26:13.316269 2804799 pod_ready.go:92] pod "kube-controller-manager-multinode-994910" in "kube-system" namespace has status "Ready":"True"
	I0821 11:26:13.316285 2804799 pod_ready.go:81] duration metric: took 63.62496ms waiting for pod "kube-controller-manager-multinode-994910" in "kube-system" namespace to be "Ready" ...
	I0821 11:26:13.316296 2804799 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-758dj" in "kube-system" namespace to be "Ready" ...
	I0821 11:26:13.512720 2804799 request.go:629] Waited for 196.344605ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-758dj
	I0821 11:26:13.512803 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-758dj
	I0821 11:26:13.512815 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:13.512825 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:13.512833 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:13.515433 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:13.515487 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:13.515523 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:13.515536 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:13 GMT
	I0821 11:26:13.515543 2804799 round_trippers.go:580]     Audit-Id: efaf69d3-dc1e-4cfd-90e9-1b595eb1e988
	I0821 11:26:13.515550 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:13.515566 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:13.515580 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:13.515701 2804799 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-758dj","generateName":"kube-proxy-","namespace":"kube-system","uid":"f2232edb-23d3-4789-86a0-9e3cd68aeea3","resourceVersion":"416","creationTimestamp":"2023-08-21T11:25:40Z","labels":{"controller-revision-hash":"86cc8bcbf7","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"85e90316-63be-42e0-89ab-cb4dd52d7cf1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:25:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"85e90316-63be-42e0-89ab-cb4dd52d7cf1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I0821 11:26:13.712524 2804799 request.go:629] Waited for 196.324823ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:26:13.712600 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:26:13.712610 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:13.712619 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:13.712626 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:13.715090 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:13.715112 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:13.715121 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:13.715128 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:13.715134 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:13.715141 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:13 GMT
	I0821 11:26:13.715147 2804799 round_trippers.go:580]     Audit-Id: 36d6170f-7c22-48a4-8e63-b322201961b7
	I0821 11:26:13.715159 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:13.715437 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","resourceVersion":"435","creationTimestamp":"2023-08-21T11:25:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-994910","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T11_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-21T11:25:23Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0821 11:26:13.715860 2804799 pod_ready.go:92] pod "kube-proxy-758dj" in "kube-system" namespace has status "Ready":"True"
	I0821 11:26:13.715877 2804799 pod_ready.go:81] duration metric: took 399.57156ms waiting for pod "kube-proxy-758dj" in "kube-system" namespace to be "Ready" ...
	I0821 11:26:13.715888 2804799 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-994910" in "kube-system" namespace to be "Ready" ...
	I0821 11:26:13.912214 2804799 request.go:629] Waited for 196.26313ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-994910
	I0821 11:26:13.912343 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-994910
	I0821 11:26:13.912356 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:13.912366 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:13.912374 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:13.914942 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:13.914976 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:13.914986 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:13.914993 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:13.914999 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:13 GMT
	I0821 11:26:13.915010 2804799 round_trippers.go:580]     Audit-Id: a8b0b2b0-92f1-4c20-8ffc-9bf88a0c7c39
	I0821 11:26:13.915017 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:13.915024 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:13.915211 2804799 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-994910","namespace":"kube-system","uid":"6e91ba04-2902-4d40-ab3a-1c492a5faf72","resourceVersion":"423","creationTimestamp":"2023-08-21T11:25:25Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b581863c19058988eada7e36b412ebab","kubernetes.io/config.mirror":"b581863c19058988eada7e36b412ebab","kubernetes.io/config.seen":"2023-08-21T11:25:18.581136241Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I0821 11:26:14.112989 2804799 request.go:629] Waited for 197.341706ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:26:14.113049 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:26:14.113055 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:14.113064 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:14.113075 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:14.115698 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:14.115767 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:14.115789 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:14.115811 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:14 GMT
	I0821 11:26:14.115843 2804799 round_trippers.go:580]     Audit-Id: 50b1f7ef-b414-40cb-b342-3fe4095cfd61
	I0821 11:26:14.115867 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:14.115889 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:14.115925 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:14.116371 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","resourceVersion":"435","creationTimestamp":"2023-08-21T11:25:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-994910","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T11_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-21T11:25:23Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0821 11:26:14.116771 2804799 pod_ready.go:92] pod "kube-scheduler-multinode-994910" in "kube-system" namespace has status "Ready":"True"
	I0821 11:26:14.116788 2804799 pod_ready.go:81] duration metric: took 400.892806ms waiting for pod "kube-scheduler-multinode-994910" in "kube-system" namespace to be "Ready" ...
	I0821 11:26:14.116801 2804799 pod_ready.go:38] duration metric: took 2.401638573s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0821 11:26:14.116817 2804799 api_server.go:52] waiting for apiserver process to appear ...
	I0821 11:26:14.116879 2804799 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0821 11:26:14.129650 2804799 command_runner.go:130] > 1232
	I0821 11:26:14.129715 2804799 api_server.go:72] duration metric: took 33.780727976s to wait for apiserver process to appear ...
	I0821 11:26:14.129729 2804799 api_server.go:88] waiting for apiserver healthz status ...
	I0821 11:26:14.129746 2804799 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0821 11:26:14.138625 2804799 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0821 11:26:14.138751 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I0821 11:26:14.138766 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:14.138776 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:14.138783 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:14.139942 2804799 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0821 11:26:14.139963 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:14.139971 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:14.139978 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:14.139984 2804799 round_trippers.go:580]     Content-Length: 263
	I0821 11:26:14.139991 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:14 GMT
	I0821 11:26:14.140001 2804799 round_trippers.go:580]     Audit-Id: 8fcd8fd4-e0d3-486a-bf75-06730cfbf96a
	I0821 11:26:14.140011 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:14.140018 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:14.140034 2804799 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "27",
	  "gitVersion": "v1.27.4",
	  "gitCommit": "fa3d7990104d7c1f16943a67f11b154b71f6a132",
	  "gitTreeState": "clean",
	  "buildDate": "2023-07-19T12:14:49Z",
	  "goVersion": "go1.20.6",
	  "compiler": "gc",
	  "platform": "linux/arm64"
	}
	I0821 11:26:14.140117 2804799 api_server.go:141] control plane version: v1.27.4
	I0821 11:26:14.140135 2804799 api_server.go:131] duration metric: took 10.400319ms to wait for apiserver health ...
	I0821 11:26:14.140143 2804799 system_pods.go:43] waiting for kube-system pods to appear ...
	I0821 11:26:14.312572 2804799 request.go:629] Waited for 172.333538ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0821 11:26:14.312643 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0821 11:26:14.312660 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:14.312673 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:14.312684 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:14.316642 2804799 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0821 11:26:14.316664 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:14.316672 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:14.316679 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:14.316685 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:14.316692 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:14.316698 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:14 GMT
	I0821 11:26:14.316705 2804799 round_trippers.go:580]     Audit-Id: 4d52e9ea-7edf-4be3-96b2-e6922e1798ad
	I0821 11:26:14.317470 2804799 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"459"},"items":[{"metadata":{"name":"coredns-5d78c9869d-zj5f8","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"b6aeac2c-fd47-4855-8a60-675aa03078a6","resourceVersion":"454","creationTimestamp":"2023-08-21T11:25:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"6f5458c4-0287-4acb-a4c3-19fd45c7091a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:25:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6f5458c4-0287-4acb-a4c3-19fd45c7091a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55613 chars]
	I0821 11:26:14.320253 2804799 system_pods.go:59] 8 kube-system pods found
	I0821 11:26:14.320296 2804799 system_pods.go:61] "coredns-5d78c9869d-zj5f8" [b6aeac2c-fd47-4855-8a60-675aa03078a6] Running
	I0821 11:26:14.320309 2804799 system_pods.go:61] "etcd-multinode-994910" [24d87a69-0a05-42d6-ba48-1d33fb7412be] Running
	I0821 11:26:14.320354 2804799 system_pods.go:61] "kindnet-vmb94" [85d5ad45-2643-4c1a-898c-b92c6d4c313d] Running
	I0821 11:26:14.320369 2804799 system_pods.go:61] "kube-apiserver-multinode-994910" [41fedc8e-465b-4561-977c-624f45660c46] Running
	I0821 11:26:14.320377 2804799 system_pods.go:61] "kube-controller-manager-multinode-994910" [884f2285-c54f-4972-bdab-2e0f7a2bf63d] Running
	I0821 11:26:14.320381 2804799 system_pods.go:61] "kube-proxy-758dj" [f2232edb-23d3-4789-86a0-9e3cd68aeea3] Running
	I0821 11:26:14.320393 2804799 system_pods.go:61] "kube-scheduler-multinode-994910" [6e91ba04-2902-4d40-ab3a-1c492a5faf72] Running
	I0821 11:26:14.320403 2804799 system_pods.go:61] "storage-provisioner" [66ef6e75-74a3-4384-8e70-dccc09707589] Running
	I0821 11:26:14.320415 2804799 system_pods.go:74] duration metric: took 180.267698ms to wait for pod list to return data ...
	I0821 11:26:14.320426 2804799 default_sa.go:34] waiting for default service account to be created ...
	I0821 11:26:14.512861 2804799 request.go:629] Waited for 192.344742ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0821 11:26:14.512922 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0821 11:26:14.512932 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:14.512941 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:14.512949 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:14.515554 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:14.515582 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:14.515591 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:14.515607 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:14.515614 2804799 round_trippers.go:580]     Content-Length: 261
	I0821 11:26:14.515622 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:14 GMT
	I0821 11:26:14.515629 2804799 round_trippers.go:580]     Audit-Id: 99343d5e-0988-4117-8d7d-445a80d0f3b9
	I0821 11:26:14.515638 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:14.515645 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:14.515667 2804799 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"460"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"90352069-a756-4fa8-99df-5e9348a152a6","resourceVersion":"349","creationTimestamp":"2023-08-21T11:25:39Z"}}]}
	I0821 11:26:14.515869 2804799 default_sa.go:45] found service account: "default"
	I0821 11:26:14.515879 2804799 default_sa.go:55] duration metric: took 195.448238ms for default service account to be created ...
	I0821 11:26:14.515887 2804799 system_pods.go:116] waiting for k8s-apps to be running ...
	I0821 11:26:14.712264 2804799 request.go:629] Waited for 196.317948ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0821 11:26:14.712342 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0821 11:26:14.712357 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:14.712367 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:14.712383 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:14.715671 2804799 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0821 11:26:14.715696 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:14.715705 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:14.715712 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:14 GMT
	I0821 11:26:14.715719 2804799 round_trippers.go:580]     Audit-Id: 14cd38b5-499b-4014-9d17-ba1fa40d493f
	I0821 11:26:14.715726 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:14.715733 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:14.715739 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:14.716639 2804799 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"461"},"items":[{"metadata":{"name":"coredns-5d78c9869d-zj5f8","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"b6aeac2c-fd47-4855-8a60-675aa03078a6","resourceVersion":"454","creationTimestamp":"2023-08-21T11:25:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"6f5458c4-0287-4acb-a4c3-19fd45c7091a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:25:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6f5458c4-0287-4acb-a4c3-19fd45c7091a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55613 chars]
	I0821 11:26:14.719049 2804799 system_pods.go:86] 8 kube-system pods found
	I0821 11:26:14.719073 2804799 system_pods.go:89] "coredns-5d78c9869d-zj5f8" [b6aeac2c-fd47-4855-8a60-675aa03078a6] Running
	I0821 11:26:14.719081 2804799 system_pods.go:89] "etcd-multinode-994910" [24d87a69-0a05-42d6-ba48-1d33fb7412be] Running
	I0821 11:26:14.719086 2804799 system_pods.go:89] "kindnet-vmb94" [85d5ad45-2643-4c1a-898c-b92c6d4c313d] Running
	I0821 11:26:14.719097 2804799 system_pods.go:89] "kube-apiserver-multinode-994910" [41fedc8e-465b-4561-977c-624f45660c46] Running
	I0821 11:26:14.719106 2804799 system_pods.go:89] "kube-controller-manager-multinode-994910" [884f2285-c54f-4972-bdab-2e0f7a2bf63d] Running
	I0821 11:26:14.719111 2804799 system_pods.go:89] "kube-proxy-758dj" [f2232edb-23d3-4789-86a0-9e3cd68aeea3] Running
	I0821 11:26:14.719119 2804799 system_pods.go:89] "kube-scheduler-multinode-994910" [6e91ba04-2902-4d40-ab3a-1c492a5faf72] Running
	I0821 11:26:14.719124 2804799 system_pods.go:89] "storage-provisioner" [66ef6e75-74a3-4384-8e70-dccc09707589] Running
	I0821 11:26:14.719133 2804799 system_pods.go:126] duration metric: took 203.241272ms to wait for k8s-apps to be running ...
	I0821 11:26:14.719141 2804799 system_svc.go:44] waiting for kubelet service to be running ....
	I0821 11:26:14.719200 2804799 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0821 11:26:14.732258 2804799 system_svc.go:56] duration metric: took 13.107308ms WaitForService to wait for kubelet.
	I0821 11:26:14.732282 2804799 kubeadm.go:581] duration metric: took 34.383309872s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0821 11:26:14.732302 2804799 node_conditions.go:102] verifying NodePressure condition ...
	I0821 11:26:14.912691 2804799 request.go:629] Waited for 180.318118ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0821 11:26:14.912769 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0821 11:26:14.912778 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:14.912789 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:14.912797 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:14.915390 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:14.915412 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:14.915421 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:14.915428 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:14 GMT
	I0821 11:26:14.915436 2804799 round_trippers.go:580]     Audit-Id: fed4a38a-59dd-4125-8552-61d2abb80a01
	I0821 11:26:14.915443 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:14.915453 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:14.915461 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:14.915549 2804799 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"461"},"items":[{"metadata":{"name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","resourceVersion":"435","creationTimestamp":"2023-08-21T11:25:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-994910","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T11_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 6082 chars]
	I0821 11:26:14.915987 2804799 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0821 11:26:14.916011 2804799 node_conditions.go:123] node cpu capacity is 2
	I0821 11:26:14.916022 2804799 node_conditions.go:105] duration metric: took 183.715326ms to run NodePressure ...
	I0821 11:26:14.916040 2804799 start.go:228] waiting for startup goroutines ...
	I0821 11:26:14.916046 2804799 start.go:233] waiting for cluster config update ...
	I0821 11:26:14.916059 2804799 start.go:242] writing updated cluster config ...
	I0821 11:26:14.918940 2804799 out.go:177] 
	I0821 11:26:14.920766 2804799 config.go:182] Loaded profile config "multinode-994910": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0821 11:26:14.920870 2804799 profile.go:148] Saving config to /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/multinode-994910/config.json ...
	I0821 11:26:14.923315 2804799 out.go:177] * Starting worker node multinode-994910-m02 in cluster multinode-994910
	I0821 11:26:14.925126 2804799 cache.go:122] Beginning downloading kic base image for docker with crio
	I0821 11:26:14.926772 2804799 out.go:177] * Pulling base image ...
	I0821 11:26:14.929261 2804799 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0821 11:26:14.929285 2804799 cache.go:57] Caching tarball of preloaded images
	I0821 11:26:14.929343 2804799 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0821 11:26:14.929376 2804799 preload.go:174] Found /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0821 11:26:14.929389 2804799 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.4 on crio
	I0821 11:26:14.929489 2804799 profile.go:148] Saving config to /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/multinode-994910/config.json ...
	I0821 11:26:14.946558 2804799 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon, skipping pull
	I0821 11:26:14.946587 2804799 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in daemon, skipping load
	I0821 11:26:14.946605 2804799 cache.go:195] Successfully downloaded all kic artifacts
	I0821 11:26:14.946634 2804799 start.go:365] acquiring machines lock for multinode-994910-m02: {Name:mk38ecf8131ec7bea21fa53242d80a5d8b3771ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 11:26:14.946759 2804799 start.go:369] acquired machines lock for "multinode-994910-m02" in 101.463µs
	I0821 11:26:14.946788 2804799 start.go:93] Provisioning new machine with config: &{Name:multinode-994910 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:multinode-994910 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name:m02 IP: Port:0 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0821 11:26:14.946867 2804799 start.go:125] createHost starting for "m02" (driver="docker")
	I0821 11:26:14.950487 2804799 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0821 11:26:14.950596 2804799 start.go:159] libmachine.API.Create for "multinode-994910" (driver="docker")
	I0821 11:26:14.950620 2804799 client.go:168] LocalClient.Create starting
	I0821 11:26:14.950686 2804799 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17102-2734539/.minikube/certs/ca.pem
	I0821 11:26:14.950725 2804799 main.go:141] libmachine: Decoding PEM data...
	I0821 11:26:14.950747 2804799 main.go:141] libmachine: Parsing certificate...
	I0821 11:26:14.950809 2804799 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17102-2734539/.minikube/certs/cert.pem
	I0821 11:26:14.950831 2804799 main.go:141] libmachine: Decoding PEM data...
	I0821 11:26:14.950845 2804799 main.go:141] libmachine: Parsing certificate...
	I0821 11:26:14.951082 2804799 cli_runner.go:164] Run: docker network inspect multinode-994910 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0821 11:26:14.972190 2804799 network_create.go:76] Found existing network {name:multinode-994910 subnet:0x400168bec0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I0821 11:26:14.972235 2804799 kic.go:117] calculated static IP "192.168.58.3" for the "multinode-994910-m02" container
	I0821 11:26:14.972309 2804799 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0821 11:26:14.988501 2804799 cli_runner.go:164] Run: docker volume create multinode-994910-m02 --label name.minikube.sigs.k8s.io=multinode-994910-m02 --label created_by.minikube.sigs.k8s.io=true
	I0821 11:26:15.017031 2804799 oci.go:103] Successfully created a docker volume multinode-994910-m02
	I0821 11:26:15.017130 2804799 cli_runner.go:164] Run: docker run --rm --name multinode-994910-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-994910-m02 --entrypoint /usr/bin/test -v multinode-994910-m02:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib
	I0821 11:26:15.593412 2804799 oci.go:107] Successfully prepared a docker volume multinode-994910-m02
	I0821 11:26:15.593449 2804799 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0821 11:26:15.593470 2804799 kic.go:190] Starting extracting preloaded images to volume ...
	I0821 11:26:15.593558 2804799 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-994910-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir
	I0821 11:26:19.653108 2804799 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-994910-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir: (4.059496587s)
	I0821 11:26:19.653139 2804799 kic.go:199] duration metric: took 4.059667 seconds to extract preloaded images to volume
	W0821 11:26:19.653271 2804799 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0821 11:26:19.653379 2804799 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0821 11:26:19.731032 2804799 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-994910-m02 --name multinode-994910-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-994910-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-994910-m02 --network multinode-994910 --ip 192.168.58.3 --volume multinode-994910-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631
	I0821 11:26:20.095921 2804799 cli_runner.go:164] Run: docker container inspect multinode-994910-m02 --format={{.State.Running}}
	I0821 11:26:20.120628 2804799 cli_runner.go:164] Run: docker container inspect multinode-994910-m02 --format={{.State.Status}}
	I0821 11:26:20.152465 2804799 cli_runner.go:164] Run: docker exec multinode-994910-m02 stat /var/lib/dpkg/alternatives/iptables
	I0821 11:26:20.255055 2804799 oci.go:144] the created container "multinode-994910-m02" has a running status.
	I0821 11:26:20.255082 2804799 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17102-2734539/.minikube/machines/multinode-994910-m02/id_rsa...
	I0821 11:26:21.251111 2804799 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-2734539/.minikube/machines/multinode-994910-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0821 11:26:21.251161 2804799 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17102-2734539/.minikube/machines/multinode-994910-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0821 11:26:21.299959 2804799 cli_runner.go:164] Run: docker container inspect multinode-994910-m02 --format={{.State.Status}}
	I0821 11:26:21.335059 2804799 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0821 11:26:21.335083 2804799 kic_runner.go:114] Args: [docker exec --privileged multinode-994910-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0821 11:26:21.422689 2804799 cli_runner.go:164] Run: docker container inspect multinode-994910-m02 --format={{.State.Status}}
	I0821 11:26:21.443606 2804799 machine.go:88] provisioning docker machine ...
	I0821 11:26:21.443639 2804799 ubuntu.go:169] provisioning hostname "multinode-994910-m02"
	I0821 11:26:21.443709 2804799 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-994910-m02
	I0821 11:26:21.467054 2804799 main.go:141] libmachine: Using SSH client type: native
	I0821 11:26:21.467495 2804799 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0570] 0x3a2f00 <nil>  [] 0s} 127.0.0.1 36268 <nil> <nil>}
	I0821 11:26:21.467605 2804799 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-994910-m02 && echo "multinode-994910-m02" | sudo tee /etc/hostname
	I0821 11:26:21.629801 2804799 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-994910-m02
	
	I0821 11:26:21.629909 2804799 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-994910-m02
	I0821 11:26:21.648544 2804799 main.go:141] libmachine: Using SSH client type: native
	I0821 11:26:21.648976 2804799 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0570] 0x3a2f00 <nil>  [] 0s} 127.0.0.1 36268 <nil> <nil>}
	I0821 11:26:21.648999 2804799 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-994910-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-994910-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-994910-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0821 11:26:21.775872 2804799 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0821 11:26:21.775900 2804799 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17102-2734539/.minikube CaCertPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17102-2734539/.minikube}
	I0821 11:26:21.775916 2804799 ubuntu.go:177] setting up certificates
	I0821 11:26:21.775924 2804799 provision.go:83] configureAuth start
	I0821 11:26:21.775984 2804799 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-994910-m02
	I0821 11:26:21.795771 2804799 provision.go:138] copyHostCerts
	I0821 11:26:21.795813 2804799 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-2734539/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17102-2734539/.minikube/ca.pem
	I0821 11:26:21.795845 2804799 exec_runner.go:144] found /home/jenkins/minikube-integration/17102-2734539/.minikube/ca.pem, removing ...
	I0821 11:26:21.795855 2804799 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17102-2734539/.minikube/ca.pem
	I0821 11:26:21.795931 2804799 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17102-2734539/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17102-2734539/.minikube/ca.pem (1078 bytes)
	I0821 11:26:21.796009 2804799 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-2734539/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17102-2734539/.minikube/cert.pem
	I0821 11:26:21.796030 2804799 exec_runner.go:144] found /home/jenkins/minikube-integration/17102-2734539/.minikube/cert.pem, removing ...
	I0821 11:26:21.796041 2804799 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17102-2734539/.minikube/cert.pem
	I0821 11:26:21.796066 2804799 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17102-2734539/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17102-2734539/.minikube/cert.pem (1123 bytes)
	I0821 11:26:21.796129 2804799 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-2734539/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17102-2734539/.minikube/key.pem
	I0821 11:26:21.796150 2804799 exec_runner.go:144] found /home/jenkins/minikube-integration/17102-2734539/.minikube/key.pem, removing ...
	I0821 11:26:21.796158 2804799 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17102-2734539/.minikube/key.pem
	I0821 11:26:21.796183 2804799 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17102-2734539/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17102-2734539/.minikube/key.pem (1675 bytes)
	I0821 11:26:21.796235 2804799 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17102-2734539/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17102-2734539/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17102-2734539/.minikube/certs/ca-key.pem org=jenkins.multinode-994910-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-994910-m02]
	I0821 11:26:22.362517 2804799 provision.go:172] copyRemoteCerts
	I0821 11:26:22.362587 2804799 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0821 11:26:22.362634 2804799 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-994910-m02
	I0821 11:26:22.383336 2804799 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36268 SSHKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/machines/multinode-994910-m02/id_rsa Username:docker}
	I0821 11:26:22.478588 2804799 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-2734539/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0821 11:26:22.478651 2804799 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-2734539/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0821 11:26:22.508415 2804799 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-2734539/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0821 11:26:22.508476 2804799 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-2734539/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0821 11:26:22.537477 2804799 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-2734539/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0821 11:26:22.537536 2804799 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-2734539/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0821 11:26:22.566112 2804799 provision.go:86] duration metric: configureAuth took 790.175126ms
	I0821 11:26:22.566141 2804799 ubuntu.go:193] setting minikube options for container-runtime
	I0821 11:26:22.566358 2804799 config.go:182] Loaded profile config "multinode-994910": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0821 11:26:22.566466 2804799 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-994910-m02
	I0821 11:26:22.583988 2804799 main.go:141] libmachine: Using SSH client type: native
	I0821 11:26:22.584424 2804799 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0570] 0x3a2f00 <nil>  [] 0s} 127.0.0.1 36268 <nil> <nil>}
	I0821 11:26:22.584438 2804799 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0821 11:26:22.833123 2804799 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0821 11:26:22.833182 2804799 machine.go:91] provisioned docker machine in 1.389555789s
	I0821 11:26:22.833204 2804799 client.go:171] LocalClient.Create took 7.882578095s
	I0821 11:26:22.833228 2804799 start.go:167] duration metric: libmachine.API.Create for "multinode-994910" took 7.882631575s
	I0821 11:26:22.833261 2804799 start.go:300] post-start starting for "multinode-994910-m02" (driver="docker")
	I0821 11:26:22.833291 2804799 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0821 11:26:22.833403 2804799 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0821 11:26:22.833481 2804799 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-994910-m02
	I0821 11:26:22.851023 2804799 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36268 SSHKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/machines/multinode-994910-m02/id_rsa Username:docker}
	I0821 11:26:22.944818 2804799 ssh_runner.go:195] Run: cat /etc/os-release
	I0821 11:26:22.948726 2804799 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.2 LTS"
	I0821 11:26:22.948744 2804799 command_runner.go:130] > NAME="Ubuntu"
	I0821 11:26:22.948751 2804799 command_runner.go:130] > VERSION_ID="22.04"
	I0821 11:26:22.948757 2804799 command_runner.go:130] > VERSION="22.04.2 LTS (Jammy Jellyfish)"
	I0821 11:26:22.948762 2804799 command_runner.go:130] > VERSION_CODENAME=jammy
	I0821 11:26:22.948767 2804799 command_runner.go:130] > ID=ubuntu
	I0821 11:26:22.948771 2804799 command_runner.go:130] > ID_LIKE=debian
	I0821 11:26:22.948776 2804799 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0821 11:26:22.948782 2804799 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0821 11:26:22.948789 2804799 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0821 11:26:22.948797 2804799 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0821 11:26:22.948820 2804799 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0821 11:26:22.949106 2804799 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0821 11:26:22.949141 2804799 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0821 11:26:22.949157 2804799 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0821 11:26:22.949163 2804799 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0821 11:26:22.949176 2804799 filesync.go:126] Scanning /home/jenkins/minikube-integration/17102-2734539/.minikube/addons for local assets ...
	I0821 11:26:22.949238 2804799 filesync.go:126] Scanning /home/jenkins/minikube-integration/17102-2734539/.minikube/files for local assets ...
	I0821 11:26:22.949320 2804799 filesync.go:149] local asset: /home/jenkins/minikube-integration/17102-2734539/.minikube/files/etc/ssl/certs/27399302.pem -> 27399302.pem in /etc/ssl/certs
	I0821 11:26:22.949331 2804799 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-2734539/.minikube/files/etc/ssl/certs/27399302.pem -> /etc/ssl/certs/27399302.pem
	I0821 11:26:22.949431 2804799 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0821 11:26:22.959737 2804799 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-2734539/.minikube/files/etc/ssl/certs/27399302.pem --> /etc/ssl/certs/27399302.pem (1708 bytes)
	I0821 11:26:22.988954 2804799 start.go:303] post-start completed in 155.658529ms
	I0821 11:26:22.989342 2804799 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-994910-m02
	I0821 11:26:23.007797 2804799 profile.go:148] Saving config to /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/multinode-994910/config.json ...
	I0821 11:26:23.008094 2804799 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0821 11:26:23.008150 2804799 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-994910-m02
	I0821 11:26:23.026411 2804799 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36268 SSHKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/machines/multinode-994910-m02/id_rsa Username:docker}
	I0821 11:26:23.119900 2804799 command_runner.go:130] > 18%!
	(MISSING)I0821 11:26:23.120039 2804799 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0821 11:26:23.125863 2804799 command_runner.go:130] > 161G
	I0821 11:26:23.126291 2804799 start.go:128] duration metric: createHost completed in 8.179410837s
	I0821 11:26:23.126311 2804799 start.go:83] releasing machines lock for "multinode-994910-m02", held for 8.179540919s
	I0821 11:26:23.126386 2804799 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-994910-m02
	I0821 11:26:23.146732 2804799 out.go:177] * Found network options:
	I0821 11:26:23.148988 2804799 out.go:177]   - NO_PROXY=192.168.58.2
	W0821 11:26:23.151189 2804799 proxy.go:119] fail to check proxy env: Error ip not in block
	W0821 11:26:23.151242 2804799 proxy.go:119] fail to check proxy env: Error ip not in block
	I0821 11:26:23.151314 2804799 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0821 11:26:23.151356 2804799 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-994910-m02
	I0821 11:26:23.151383 2804799 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0821 11:26:23.151438 2804799 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-994910-m02
	I0821 11:26:23.178617 2804799 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36268 SSHKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/machines/multinode-994910-m02/id_rsa Username:docker}
	I0821 11:26:23.178836 2804799 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36268 SSHKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/machines/multinode-994910-m02/id_rsa Username:docker}
	I0821 11:26:23.409470 2804799 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0821 11:26:23.435423 2804799 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0821 11:26:23.440739 2804799 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0821 11:26:23.440763 2804799 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0821 11:26:23.440772 2804799 command_runner.go:130] > Device: b3h/179d	Inode: 5709935     Links: 1
	I0821 11:26:23.440779 2804799 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0821 11:26:23.440786 2804799 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0821 11:26:23.440792 2804799 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0821 11:26:23.440798 2804799 command_runner.go:130] > Change: 2023-08-21 11:02:38.566259907 +0000
	I0821 11:26:23.440808 2804799 command_runner.go:130] >  Birth: 2023-08-21 11:02:38.566259907 +0000
	I0821 11:26:23.441092 2804799 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0821 11:26:23.464141 2804799 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0821 11:26:23.464223 2804799 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0821 11:26:23.510703 2804799 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0821 11:26:23.510760 2804799 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0821 11:26:23.510768 2804799 start.go:466] detecting cgroup driver to use...
	I0821 11:26:23.510799 2804799 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0821 11:26:23.510848 2804799 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0821 11:26:23.530669 2804799 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0821 11:26:23.544879 2804799 docker.go:196] disabling cri-docker service (if available) ...
	I0821 11:26:23.544999 2804799 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0821 11:26:23.561721 2804799 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0821 11:26:23.578154 2804799 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0821 11:26:23.681286 2804799 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0821 11:26:23.801969 2804799 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0821 11:26:23.802015 2804799 docker.go:212] disabling docker service ...
	I0821 11:26:23.802109 2804799 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0821 11:26:23.825220 2804799 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0821 11:26:23.839999 2804799 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0821 11:26:23.949260 2804799 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0821 11:26:23.949334 2804799 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0821 11:26:24.063576 2804799 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0821 11:26:24.063648 2804799 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0821 11:26:24.077924 2804799 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0821 11:26:24.097422 2804799 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0821 11:26:24.098908 2804799 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0821 11:26:24.099014 2804799 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0821 11:26:24.111571 2804799 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0821 11:26:24.111665 2804799 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0821 11:26:24.124163 2804799 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0821 11:26:24.137666 2804799 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0821 11:26:24.150101 2804799 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0821 11:26:24.161603 2804799 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0821 11:26:24.170903 2804799 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0821 11:26:24.172025 2804799 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0821 11:26:24.182519 2804799 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0821 11:26:24.283086 2804799 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0821 11:26:24.411510 2804799 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0821 11:26:24.411620 2804799 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0821 11:26:24.416677 2804799 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0821 11:26:24.416734 2804799 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0821 11:26:24.416747 2804799 command_runner.go:130] > Device: bch/188d	Inode: 186         Links: 1
	I0821 11:26:24.416756 2804799 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0821 11:26:24.416763 2804799 command_runner.go:130] > Access: 2023-08-21 11:26:24.385895174 +0000
	I0821 11:26:24.416772 2804799 command_runner.go:130] > Modify: 2023-08-21 11:26:24.385895174 +0000
	I0821 11:26:24.416782 2804799 command_runner.go:130] > Change: 2023-08-21 11:26:24.385895174 +0000
	I0821 11:26:24.416787 2804799 command_runner.go:130] >  Birth: -
	I0821 11:26:24.416818 2804799 start.go:534] Will wait 60s for crictl version
	I0821 11:26:24.416877 2804799 ssh_runner.go:195] Run: which crictl
	I0821 11:26:24.421103 2804799 command_runner.go:130] > /usr/bin/crictl
	I0821 11:26:24.421168 2804799 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0821 11:26:24.461664 2804799 command_runner.go:130] > Version:  0.1.0
	I0821 11:26:24.461744 2804799 command_runner.go:130] > RuntimeName:  cri-o
	I0821 11:26:24.461764 2804799 command_runner.go:130] > RuntimeVersion:  1.24.6
	I0821 11:26:24.461796 2804799 command_runner.go:130] > RuntimeApiVersion:  v1
	I0821 11:26:24.464421 2804799 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0821 11:26:24.464507 2804799 ssh_runner.go:195] Run: crio --version
	I0821 11:26:24.517557 2804799 command_runner.go:130] > crio version 1.24.6
	I0821 11:26:24.517584 2804799 command_runner.go:130] > Version:          1.24.6
	I0821 11:26:24.517593 2804799 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0821 11:26:24.517599 2804799 command_runner.go:130] > GitTreeState:     clean
	I0821 11:26:24.517606 2804799 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0821 11:26:24.517612 2804799 command_runner.go:130] > GoVersion:        go1.18.2
	I0821 11:26:24.517617 2804799 command_runner.go:130] > Compiler:         gc
	I0821 11:26:24.517624 2804799 command_runner.go:130] > Platform:         linux/arm64
	I0821 11:26:24.517634 2804799 command_runner.go:130] > Linkmode:         dynamic
	I0821 11:26:24.517644 2804799 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0821 11:26:24.517652 2804799 command_runner.go:130] > SeccompEnabled:   true
	I0821 11:26:24.517657 2804799 command_runner.go:130] > AppArmorEnabled:  false
	I0821 11:26:24.517735 2804799 ssh_runner.go:195] Run: crio --version
	I0821 11:26:24.563304 2804799 command_runner.go:130] > crio version 1.24.6
	I0821 11:26:24.563328 2804799 command_runner.go:130] > Version:          1.24.6
	I0821 11:26:24.563338 2804799 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0821 11:26:24.563343 2804799 command_runner.go:130] > GitTreeState:     clean
	I0821 11:26:24.563350 2804799 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0821 11:26:24.563355 2804799 command_runner.go:130] > GoVersion:        go1.18.2
	I0821 11:26:24.563360 2804799 command_runner.go:130] > Compiler:         gc
	I0821 11:26:24.563366 2804799 command_runner.go:130] > Platform:         linux/arm64
	I0821 11:26:24.563376 2804799 command_runner.go:130] > Linkmode:         dynamic
	I0821 11:26:24.563386 2804799 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0821 11:26:24.563395 2804799 command_runner.go:130] > SeccompEnabled:   true
	I0821 11:26:24.563400 2804799 command_runner.go:130] > AppArmorEnabled:  false
	I0821 11:26:24.565677 2804799 out.go:177] * Preparing Kubernetes v1.27.4 on CRI-O 1.24.6 ...
	I0821 11:26:24.567703 2804799 out.go:177]   - env NO_PROXY=192.168.58.2
	I0821 11:26:24.569644 2804799 cli_runner.go:164] Run: docker network inspect multinode-994910 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0821 11:26:24.586746 2804799 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0821 11:26:24.591219 2804799 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0821 11:26:24.604617 2804799 certs.go:56] Setting up /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/multinode-994910 for IP: 192.168.58.3
	I0821 11:26:24.604647 2804799 certs.go:190] acquiring lock for shared ca certs: {Name:mkf22db11ef8c10db9220127fbe1c5ce3b246b6b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 11:26:24.604779 2804799 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17102-2734539/.minikube/ca.key
	I0821 11:26:24.604843 2804799 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17102-2734539/.minikube/proxy-client-ca.key
	I0821 11:26:24.604854 2804799 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-2734539/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0821 11:26:24.604869 2804799 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-2734539/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0821 11:26:24.604881 2804799 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-2734539/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0821 11:26:24.604891 2804799 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-2734539/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0821 11:26:24.604939 2804799 certs.go:437] found cert: /home/jenkins/minikube-integration/17102-2734539/.minikube/certs/home/jenkins/minikube-integration/17102-2734539/.minikube/certs/2739930.pem (1338 bytes)
	W0821 11:26:24.604976 2804799 certs.go:433] ignoring /home/jenkins/minikube-integration/17102-2734539/.minikube/certs/home/jenkins/minikube-integration/17102-2734539/.minikube/certs/2739930_empty.pem, impossibly tiny 0 bytes
	I0821 11:26:24.604985 2804799 certs.go:437] found cert: /home/jenkins/minikube-integration/17102-2734539/.minikube/certs/home/jenkins/minikube-integration/17102-2734539/.minikube/certs/ca-key.pem (1679 bytes)
	I0821 11:26:24.605010 2804799 certs.go:437] found cert: /home/jenkins/minikube-integration/17102-2734539/.minikube/certs/home/jenkins/minikube-integration/17102-2734539/.minikube/certs/ca.pem (1078 bytes)
	I0821 11:26:24.605037 2804799 certs.go:437] found cert: /home/jenkins/minikube-integration/17102-2734539/.minikube/certs/home/jenkins/minikube-integration/17102-2734539/.minikube/certs/cert.pem (1123 bytes)
	I0821 11:26:24.605062 2804799 certs.go:437] found cert: /home/jenkins/minikube-integration/17102-2734539/.minikube/certs/home/jenkins/minikube-integration/17102-2734539/.minikube/certs/key.pem (1675 bytes)
	I0821 11:26:24.605105 2804799 certs.go:437] found cert: /home/jenkins/minikube-integration/17102-2734539/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17102-2734539/.minikube/files/etc/ssl/certs/27399302.pem (1708 bytes)
	I0821 11:26:24.605134 2804799 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-2734539/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0821 11:26:24.605145 2804799 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-2734539/.minikube/certs/2739930.pem -> /usr/share/ca-certificates/2739930.pem
	I0821 11:26:24.605155 2804799 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17102-2734539/.minikube/files/etc/ssl/certs/27399302.pem -> /usr/share/ca-certificates/27399302.pem
	I0821 11:26:24.605558 2804799 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-2734539/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0821 11:26:24.634518 2804799 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-2734539/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0821 11:26:24.663657 2804799 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-2734539/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0821 11:26:24.691924 2804799 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-2734539/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0821 11:26:24.722572 2804799 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-2734539/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0821 11:26:24.752215 2804799 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-2734539/.minikube/certs/2739930.pem --> /usr/share/ca-certificates/2739930.pem (1338 bytes)
	I0821 11:26:24.781202 2804799 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-2734539/.minikube/files/etc/ssl/certs/27399302.pem --> /usr/share/ca-certificates/27399302.pem (1708 bytes)
	I0821 11:26:24.809838 2804799 ssh_runner.go:195] Run: openssl version
	I0821 11:26:24.816341 2804799 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0821 11:26:24.816818 2804799 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0821 11:26:24.828261 2804799 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0821 11:26:24.832737 2804799 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug 21 11:03 /usr/share/ca-certificates/minikubeCA.pem
	I0821 11:26:24.833024 2804799 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Aug 21 11:03 /usr/share/ca-certificates/minikubeCA.pem
	I0821 11:26:24.833105 2804799 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0821 11:26:24.841114 2804799 command_runner.go:130] > b5213941
	I0821 11:26:24.841588 2804799 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0821 11:26:24.853227 2804799 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2739930.pem && ln -fs /usr/share/ca-certificates/2739930.pem /etc/ssl/certs/2739930.pem"
	I0821 11:26:24.864631 2804799 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2739930.pem
	I0821 11:26:24.869014 2804799 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug 21 11:09 /usr/share/ca-certificates/2739930.pem
	I0821 11:26:24.869228 2804799 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Aug 21 11:09 /usr/share/ca-certificates/2739930.pem
	I0821 11:26:24.869309 2804799 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2739930.pem
	I0821 11:26:24.877611 2804799 command_runner.go:130] > 51391683
	I0821 11:26:24.878045 2804799 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2739930.pem /etc/ssl/certs/51391683.0"
	I0821 11:26:24.889143 2804799 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/27399302.pem && ln -fs /usr/share/ca-certificates/27399302.pem /etc/ssl/certs/27399302.pem"
	I0821 11:26:24.901007 2804799 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/27399302.pem
	I0821 11:26:24.905216 2804799 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug 21 11:09 /usr/share/ca-certificates/27399302.pem
	I0821 11:26:24.905476 2804799 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Aug 21 11:09 /usr/share/ca-certificates/27399302.pem
	I0821 11:26:24.905532 2804799 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/27399302.pem
	I0821 11:26:24.913312 2804799 command_runner.go:130] > 3ec20f2e
	I0821 11:26:24.913753 2804799 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/27399302.pem /etc/ssl/certs/3ec20f2e.0"
	I0821 11:26:24.925157 2804799 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0821 11:26:24.929286 2804799 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0821 11:26:24.929318 2804799 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0821 11:26:24.929413 2804799 ssh_runner.go:195] Run: crio config
	I0821 11:26:24.983092 2804799 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0821 11:26:24.983119 2804799 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0821 11:26:24.983128 2804799 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0821 11:26:24.983132 2804799 command_runner.go:130] > #
	I0821 11:26:24.983140 2804799 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0821 11:26:24.983148 2804799 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0821 11:26:24.983159 2804799 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0821 11:26:24.983169 2804799 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0821 11:26:24.983181 2804799 command_runner.go:130] > # reload'.
	I0821 11:26:24.983189 2804799 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0821 11:26:24.983197 2804799 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0821 11:26:24.983207 2804799 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0821 11:26:24.983214 2804799 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0821 11:26:24.983218 2804799 command_runner.go:130] > [crio]
	I0821 11:26:24.983228 2804799 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0821 11:26:24.983234 2804799 command_runner.go:130] > # containers images, in this directory.
	I0821 11:26:24.983842 2804799 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I0821 11:26:24.983894 2804799 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0821 11:26:24.984289 2804799 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I0821 11:26:24.984325 2804799 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0821 11:26:24.984354 2804799 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0821 11:26:24.984745 2804799 command_runner.go:130] > # storage_driver = "vfs"
	I0821 11:26:24.984781 2804799 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0821 11:26:24.984810 2804799 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0821 11:26:24.984925 2804799 command_runner.go:130] > # storage_option = [
	I0821 11:26:24.985113 2804799 command_runner.go:130] > # ]
	I0821 11:26:24.985157 2804799 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0821 11:26:24.985178 2804799 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0821 11:26:24.985567 2804799 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0821 11:26:24.985617 2804799 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0821 11:26:24.985643 2804799 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0821 11:26:24.985677 2804799 command_runner.go:130] > # always happen on a node reboot
	I0821 11:26:24.986047 2804799 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0821 11:26:24.986095 2804799 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0821 11:26:24.986116 2804799 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0821 11:26:24.986139 2804799 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0821 11:26:24.986490 2804799 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0821 11:26:24.986542 2804799 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0821 11:26:24.986566 2804799 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0821 11:26:24.986737 2804799 command_runner.go:130] > # internal_wipe = true
	I0821 11:26:24.986778 2804799 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0821 11:26:24.986806 2804799 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0821 11:26:24.986830 2804799 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0821 11:26:24.986959 2804799 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0821 11:26:24.986996 2804799 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0821 11:26:24.987039 2804799 command_runner.go:130] > [crio.api]
	I0821 11:26:24.987064 2804799 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0821 11:26:24.987429 2804799 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0821 11:26:24.987484 2804799 command_runner.go:130] > # IP address on which the stream server will listen.
	I0821 11:26:24.987805 2804799 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0821 11:26:24.987818 2804799 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0821 11:26:24.987824 2804799 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0821 11:26:24.988233 2804799 command_runner.go:130] > # stream_port = "0"
	I0821 11:26:24.988293 2804799 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0821 11:26:24.988787 2804799 command_runner.go:130] > # stream_enable_tls = false
	I0821 11:26:24.988822 2804799 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0821 11:26:24.989132 2804799 command_runner.go:130] > # stream_idle_timeout = ""
	I0821 11:26:24.989169 2804799 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0821 11:26:24.989192 2804799 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0821 11:26:24.989209 2804799 command_runner.go:130] > # minutes.
	I0821 11:26:24.989433 2804799 command_runner.go:130] > # stream_tls_cert = ""
	I0821 11:26:24.989511 2804799 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0821 11:26:24.989623 2804799 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0821 11:26:24.989849 2804799 command_runner.go:130] > # stream_tls_key = ""
	I0821 11:26:24.989935 2804799 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0821 11:26:24.990042 2804799 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0821 11:26:24.990144 2804799 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0821 11:26:24.990216 2804799 command_runner.go:130] > # stream_tls_ca = ""
	I0821 11:26:24.990244 2804799 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0821 11:26:24.990615 2804799 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I0821 11:26:24.990741 2804799 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0821 11:26:24.990889 2804799 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I0821 11:26:24.991067 2804799 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0821 11:26:24.991264 2804799 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0821 11:26:24.991365 2804799 command_runner.go:130] > [crio.runtime]
	I0821 11:26:24.991508 2804799 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0821 11:26:24.991555 2804799 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0821 11:26:24.991646 2804799 command_runner.go:130] > # "nofile=1024:2048"
	I0821 11:26:24.991668 2804799 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0821 11:26:24.991817 2804799 command_runner.go:130] > # default_ulimits = [
	I0821 11:26:24.991902 2804799 command_runner.go:130] > # ]
	I0821 11:26:24.991984 2804799 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0821 11:26:24.992003 2804799 command_runner.go:130] > # no_pivot = false
	I0821 11:26:24.992083 2804799 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0821 11:26:24.992138 2804799 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0821 11:26:24.992677 2804799 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0821 11:26:24.992721 2804799 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0821 11:26:24.992742 2804799 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0821 11:26:24.992811 2804799 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0821 11:26:24.992835 2804799 command_runner.go:130] > # conmon = ""
	I0821 11:26:24.992855 2804799 command_runner.go:130] > # Cgroup setting for conmon
	I0821 11:26:24.992893 2804799 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0821 11:26:24.992916 2804799 command_runner.go:130] > conmon_cgroup = "pod"
	I0821 11:26:24.992937 2804799 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0821 11:26:24.992971 2804799 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0821 11:26:24.992998 2804799 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0821 11:26:24.993017 2804799 command_runner.go:130] > # conmon_env = [
	I0821 11:26:24.993053 2804799 command_runner.go:130] > # ]
	I0821 11:26:24.993077 2804799 command_runner.go:130] > # Additional environment variables to set for all the
	I0821 11:26:24.993097 2804799 command_runner.go:130] > # containers. These are overridden if set in the
	I0821 11:26:24.993130 2804799 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0821 11:26:24.993151 2804799 command_runner.go:130] > # default_env = [
	I0821 11:26:24.993169 2804799 command_runner.go:130] > # ]
	I0821 11:26:24.993189 2804799 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0821 11:26:24.993221 2804799 command_runner.go:130] > # selinux = false
	I0821 11:26:24.993246 2804799 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0821 11:26:24.993267 2804799 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0821 11:26:24.993302 2804799 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0821 11:26:24.993327 2804799 command_runner.go:130] > # seccomp_profile = ""
	I0821 11:26:24.993348 2804799 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0821 11:26:24.993384 2804799 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0821 11:26:24.993408 2804799 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0821 11:26:24.993427 2804799 command_runner.go:130] > # which might increase security.
	I0821 11:26:24.993462 2804799 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I0821 11:26:24.993487 2804799 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0821 11:26:24.993510 2804799 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0821 11:26:24.993545 2804799 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0821 11:26:24.993570 2804799 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0821 11:26:24.993601 2804799 command_runner.go:130] > # This option supports live configuration reload.
	I0821 11:26:24.993632 2804799 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0821 11:26:24.993656 2804799 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0821 11:26:24.993676 2804799 command_runner.go:130] > # the cgroup blockio controller.
	I0821 11:26:24.993709 2804799 command_runner.go:130] > # blockio_config_file = ""
	I0821 11:26:24.993734 2804799 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0821 11:26:24.993752 2804799 command_runner.go:130] > # irqbalance daemon.
	I0821 11:26:24.993786 2804799 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0821 11:26:24.993810 2804799 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0821 11:26:24.993830 2804799 command_runner.go:130] > # This option supports live configuration reload.
	I0821 11:26:24.993862 2804799 command_runner.go:130] > # rdt_config_file = ""
	I0821 11:26:24.993895 2804799 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0821 11:26:24.993915 2804799 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0821 11:26:24.993949 2804799 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0821 11:26:24.994067 2804799 command_runner.go:130] > # separate_pull_cgroup = ""
	I0821 11:26:24.994106 2804799 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0821 11:26:24.994125 2804799 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0821 11:26:24.994143 2804799 command_runner.go:130] > # will be added.
	I0821 11:26:24.994176 2804799 command_runner.go:130] > # default_capabilities = [
	I0821 11:26:24.994202 2804799 command_runner.go:130] > # 	"CHOWN",
	I0821 11:26:24.994222 2804799 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0821 11:26:24.994241 2804799 command_runner.go:130] > # 	"FSETID",
	I0821 11:26:24.994278 2804799 command_runner.go:130] > # 	"FOWNER",
	I0821 11:26:24.994298 2804799 command_runner.go:130] > # 	"SETGID",
	I0821 11:26:24.994314 2804799 command_runner.go:130] > # 	"SETUID",
	I0821 11:26:24.994332 2804799 command_runner.go:130] > # 	"SETPCAP",
	I0821 11:26:24.994351 2804799 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0821 11:26:24.994378 2804799 command_runner.go:130] > # 	"KILL",
	I0821 11:26:24.994458 2804799 command_runner.go:130] > # ]
	I0821 11:26:24.994499 2804799 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0821 11:26:24.994524 2804799 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0821 11:26:24.994580 2804799 command_runner.go:130] > # add_inheritable_capabilities = true
	I0821 11:26:24.994611 2804799 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0821 11:26:24.994635 2804799 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0821 11:26:24.994684 2804799 command_runner.go:130] > # default_sysctls = [
	I0821 11:26:24.994916 2804799 command_runner.go:130] > # ]
	I0821 11:26:24.994946 2804799 command_runner.go:130] > # List of devices on the host that a
	I0821 11:26:24.994976 2804799 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0821 11:26:24.995153 2804799 command_runner.go:130] > # allowed_devices = [
	I0821 11:26:24.995424 2804799 command_runner.go:130] > # 	"/dev/fuse",
	I0821 11:26:24.995657 2804799 command_runner.go:130] > # ]
	I0821 11:26:24.995698 2804799 command_runner.go:130] > # List of additional devices. specified as
	I0821 11:26:24.995729 2804799 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0821 11:26:24.995749 2804799 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0821 11:26:24.995780 2804799 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0821 11:26:24.995857 2804799 command_runner.go:130] > # additional_devices = [
	I0821 11:26:24.996156 2804799 command_runner.go:130] > # ]
	I0821 11:26:24.996186 2804799 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0821 11:26:24.996405 2804799 command_runner.go:130] > # cdi_spec_dirs = [
	I0821 11:26:24.996673 2804799 command_runner.go:130] > # 	"/etc/cdi",
	I0821 11:26:24.996938 2804799 command_runner.go:130] > # 	"/var/run/cdi",
	I0821 11:26:24.997214 2804799 command_runner.go:130] > # ]
	I0821 11:26:24.997244 2804799 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0821 11:26:24.997273 2804799 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0821 11:26:24.997292 2804799 command_runner.go:130] > # Defaults to false.
	I0821 11:26:24.997712 2804799 command_runner.go:130] > # device_ownership_from_security_context = false
	I0821 11:26:24.997758 2804799 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0821 11:26:24.997778 2804799 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0821 11:26:24.997976 2804799 command_runner.go:130] > # hooks_dir = [
	I0821 11:26:24.998428 2804799 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0821 11:26:24.998458 2804799 command_runner.go:130] > # ]
	I0821 11:26:24.998496 2804799 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0821 11:26:24.998519 2804799 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0821 11:26:24.998553 2804799 command_runner.go:130] > # its default mounts from the following two files:
	I0821 11:26:24.998582 2804799 command_runner.go:130] > #
	I0821 11:26:24.998607 2804799 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0821 11:26:24.998630 2804799 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0821 11:26:24.998752 2804799 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0821 11:26:24.998801 2804799 command_runner.go:130] > #
	I0821 11:26:24.998823 2804799 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0821 11:26:24.998876 2804799 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0821 11:26:24.998901 2804799 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0821 11:26:24.998922 2804799 command_runner.go:130] > #      only add mounts it finds in this file.
	I0821 11:26:24.998950 2804799 command_runner.go:130] > #
	I0821 11:26:24.998972 2804799 command_runner.go:130] > # default_mounts_file = ""
	I0821 11:26:24.999013 2804799 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0821 11:26:24.999051 2804799 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0821 11:26:24.999070 2804799 command_runner.go:130] > # pids_limit = 0
	I0821 11:26:24.999129 2804799 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0821 11:26:24.999169 2804799 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0821 11:26:24.999200 2804799 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0821 11:26:24.999227 2804799 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0821 11:26:24.999285 2804799 command_runner.go:130] > # log_size_max = -1
	I0821 11:26:24.999328 2804799 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0821 11:26:24.999361 2804799 command_runner.go:130] > # log_to_journald = false
	I0821 11:26:24.999387 2804799 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0821 11:26:24.999406 2804799 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0821 11:26:24.999466 2804799 command_runner.go:130] > # Path to directory for container attach sockets.
	I0821 11:26:24.999494 2804799 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0821 11:26:24.999527 2804799 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0821 11:26:24.999550 2804799 command_runner.go:130] > # bind_mount_prefix = ""
	I0821 11:26:24.999624 2804799 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0821 11:26:24.999687 2804799 command_runner.go:130] > # read_only = false
	I0821 11:26:24.999712 2804799 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0821 11:26:24.999733 2804799 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0821 11:26:24.999767 2804799 command_runner.go:130] > # live configuration reload.
	I0821 11:26:24.999790 2804799 command_runner.go:130] > # log_level = "info"
	I0821 11:26:24.999847 2804799 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0821 11:26:24.999872 2804799 command_runner.go:130] > # This option supports live configuration reload.
	I0821 11:26:24.999888 2804799 command_runner.go:130] > # log_filter = ""
	I0821 11:26:24.999951 2804799 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0821 11:26:24.999977 2804799 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0821 11:26:25.000017 2804799 command_runner.go:130] > # separated by comma.
	I0821 11:26:25.000035 2804799 command_runner.go:130] > # uid_mappings = ""
	I0821 11:26:25.000093 2804799 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0821 11:26:25.000119 2804799 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0821 11:26:25.000162 2804799 command_runner.go:130] > # separated by comma.
	I0821 11:26:25.000185 2804799 command_runner.go:130] > # gid_mappings = ""
	I0821 11:26:25.000204 2804799 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0821 11:26:25.000237 2804799 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0821 11:26:25.000265 2804799 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0821 11:26:25.000351 2804799 command_runner.go:130] > # minimum_mappable_uid = -1
	I0821 11:26:25.000401 2804799 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0821 11:26:25.000422 2804799 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0821 11:26:25.000455 2804799 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0821 11:26:25.000475 2804799 command_runner.go:130] > # minimum_mappable_gid = -1
	I0821 11:26:25.000495 2804799 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0821 11:26:25.000516 2804799 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0821 11:26:25.000552 2804799 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0821 11:26:25.000576 2804799 command_runner.go:130] > # ctr_stop_timeout = 30
	I0821 11:26:25.000599 2804799 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0821 11:26:25.000634 2804799 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0821 11:26:25.000659 2804799 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0821 11:26:25.000681 2804799 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0821 11:26:25.000750 2804799 command_runner.go:130] > # drop_infra_ctr = true
	I0821 11:26:25.000803 2804799 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0821 11:26:25.000823 2804799 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0821 11:26:25.000880 2804799 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0821 11:26:25.000903 2804799 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0821 11:26:25.000951 2804799 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0821 11:26:25.000974 2804799 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0821 11:26:25.000992 2804799 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0821 11:26:25.001049 2804799 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0821 11:26:25.001071 2804799 command_runner.go:130] > # pinns_path = ""
	I0821 11:26:25.001118 2804799 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0821 11:26:25.001143 2804799 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0821 11:26:25.001200 2804799 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0821 11:26:25.001223 2804799 command_runner.go:130] > # default_runtime = "runc"
	I0821 11:26:25.001266 2804799 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0821 11:26:25.001295 2804799 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0821 11:26:25.001320 2804799 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0821 11:26:25.001371 2804799 command_runner.go:130] > # creation as a file is not desired either.
	I0821 11:26:25.001426 2804799 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0821 11:26:25.001449 2804799 command_runner.go:130] > # the hostname is being managed dynamically.
	I0821 11:26:25.001469 2804799 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0821 11:26:25.001485 2804799 command_runner.go:130] > # ]
	I0821 11:26:25.001518 2804799 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0821 11:26:25.001568 2804799 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0821 11:26:25.001624 2804799 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0821 11:26:25.001650 2804799 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0821 11:26:25.001665 2804799 command_runner.go:130] > #
	I0821 11:26:25.001722 2804799 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0821 11:26:25.001747 2804799 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0821 11:26:25.001778 2804799 command_runner.go:130] > #  runtime_type = "oci"
	I0821 11:26:25.001802 2804799 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0821 11:26:25.001834 2804799 command_runner.go:130] > #  privileged_without_host_devices = false
	I0821 11:26:25.001947 2804799 command_runner.go:130] > #  allowed_annotations = []
	I0821 11:26:25.001972 2804799 command_runner.go:130] > # Where:
	I0821 11:26:25.001995 2804799 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0821 11:26:25.002057 2804799 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0821 11:26:25.002088 2804799 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0821 11:26:25.002122 2804799 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0821 11:26:25.002144 2804799 command_runner.go:130] > #   in $PATH.
	I0821 11:26:25.002181 2804799 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0821 11:26:25.002216 2804799 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0821 11:26:25.002238 2804799 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0821 11:26:25.002254 2804799 command_runner.go:130] > #   state.
	I0821 11:26:25.002316 2804799 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0821 11:26:25.002479 2804799 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0821 11:26:25.002505 2804799 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0821 11:26:25.002527 2804799 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0821 11:26:25.002562 2804799 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0821 11:26:25.002588 2804799 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0821 11:26:25.002629 2804799 command_runner.go:130] > #   The currently recognized values are:
	I0821 11:26:25.002664 2804799 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0821 11:26:25.002691 2804799 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0821 11:26:25.002714 2804799 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0821 11:26:25.002737 2804799 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0821 11:26:25.002774 2804799 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0821 11:26:25.002803 2804799 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0821 11:26:25.002826 2804799 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0821 11:26:25.002849 2804799 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0821 11:26:25.002879 2804799 command_runner.go:130] > #   should be moved to the container's cgroup
	I0821 11:26:25.002902 2804799 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0821 11:26:25.002923 2804799 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0821 11:26:25.002941 2804799 command_runner.go:130] > runtime_type = "oci"
	I0821 11:26:25.002961 2804799 command_runner.go:130] > runtime_root = "/run/runc"
	I0821 11:26:25.002993 2804799 command_runner.go:130] > runtime_config_path = ""
	I0821 11:26:25.003017 2804799 command_runner.go:130] > monitor_path = ""
	I0821 11:26:25.003038 2804799 command_runner.go:130] > monitor_cgroup = ""
	I0821 11:26:25.003058 2804799 command_runner.go:130] > monitor_exec_cgroup = ""
	I0821 11:26:25.003112 2804799 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0821 11:26:25.003136 2804799 command_runner.go:130] > # running containers
	I0821 11:26:25.003156 2804799 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0821 11:26:25.003180 2804799 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0821 11:26:25.003217 2804799 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0821 11:26:25.003243 2804799 command_runner.go:130] > # surface and mitigating the consequences of a container breakout.
	I0821 11:26:25.003263 2804799 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0821 11:26:25.003284 2804799 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0821 11:26:25.003320 2804799 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0821 11:26:25.003342 2804799 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0821 11:26:25.003361 2804799 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0821 11:26:25.003384 2804799 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0821 11:26:25.003420 2804799 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0821 11:26:25.003443 2804799 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0821 11:26:25.003466 2804799 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0821 11:26:25.003490 2804799 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0821 11:26:25.003525 2804799 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0821 11:26:25.003549 2804799 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0821 11:26:25.003575 2804799 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0821 11:26:25.003600 2804799 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0821 11:26:25.003632 2804799 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0821 11:26:25.003663 2804799 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0821 11:26:25.003682 2804799 command_runner.go:130] > # Example:
	I0821 11:26:25.003702 2804799 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0821 11:26:25.003734 2804799 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0821 11:26:25.003756 2804799 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0821 11:26:25.003778 2804799 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0821 11:26:25.003798 2804799 command_runner.go:130] > # cpuset = 0
	I0821 11:26:25.003831 2804799 command_runner.go:130] > # cpushares = "0-1"
	I0821 11:26:25.003850 2804799 command_runner.go:130] > # Where:
	I0821 11:26:25.003870 2804799 command_runner.go:130] > # The workload name is workload-type.
	I0821 11:26:25.003894 2804799 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0821 11:26:25.003926 2804799 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0821 11:26:25.003950 2804799 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0821 11:26:25.004174 2804799 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0821 11:26:25.004208 2804799 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
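	The workload annotation scheme described above (a key-only activation annotation plus per-container overrides of the form shown in the example) can be sketched as a small helper. This is an illustrative assumption-laden sketch, not CRI-O code; the handler names and values are hypothetical.

```python
import json

def workload_annotations(activation: str, prefix: str, ctr: str, overrides: dict) -> dict:
    """Build pod annotations for a CRI-O workload, following the example above:
    the activation annotation (value ignored) opts the pod in, and a
    "<prefix>/<container>" annotation carries the per-container resource overrides."""
    annotations = {activation: ""}  # key-only; CRI-O ignores the value
    annotations[f"{prefix}/{ctr}"] = json.dumps(overrides)
    return annotations

# Hypothetical pod annotations overriding cpushares for a container named "nginx".
print(workload_annotations(
    "io.crio/workload", "io.crio.workload-type", "nginx", {"cpushares": "512"}))
```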
	I0821 11:26:25.004226 2804799 command_runner.go:130] > # 
	I0821 11:26:25.004249 2804799 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0821 11:26:25.004280 2804799 command_runner.go:130] > #
	I0821 11:26:25.004303 2804799 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0821 11:26:25.004324 2804799 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0821 11:26:25.004352 2804799 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0821 11:26:25.004386 2804799 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0821 11:26:25.004414 2804799 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0821 11:26:25.004436 2804799 command_runner.go:130] > [crio.image]
	I0821 11:26:25.004459 2804799 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0821 11:26:25.004490 2804799 command_runner.go:130] > # default_transport = "docker://"
	I0821 11:26:25.004513 2804799 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0821 11:26:25.004535 2804799 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0821 11:26:25.004557 2804799 command_runner.go:130] > # global_auth_file = ""
	I0821 11:26:25.004588 2804799 command_runner.go:130] > # The image used to instantiate infra containers.
	I0821 11:26:25.004612 2804799 command_runner.go:130] > # This option supports live configuration reload.
	I0821 11:26:25.004744 2804799 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0821 11:26:25.004769 2804799 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0821 11:26:25.004974 2804799 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0821 11:26:25.005006 2804799 command_runner.go:130] > # This option supports live configuration reload.
	I0821 11:26:25.005026 2804799 command_runner.go:130] > # pause_image_auth_file = ""
	I0821 11:26:25.005048 2804799 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0821 11:26:25.005082 2804799 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0821 11:26:25.005106 2804799 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0821 11:26:25.005127 2804799 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0821 11:26:25.005147 2804799 command_runner.go:130] > # pause_command = "/pause"
	I0821 11:26:25.005181 2804799 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0821 11:26:25.005207 2804799 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0821 11:26:25.005229 2804799 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0821 11:26:25.005253 2804799 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0821 11:26:25.005287 2804799 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0821 11:26:25.005315 2804799 command_runner.go:130] > # signature_policy = ""
	I0821 11:26:25.005341 2804799 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0821 11:26:25.005367 2804799 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0821 11:26:25.005400 2804799 command_runner.go:130] > # changing them here.
	I0821 11:26:25.005431 2804799 command_runner.go:130] > # insecure_registries = [
	I0821 11:26:25.005452 2804799 command_runner.go:130] > # ]
	I0821 11:26:25.005475 2804799 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0821 11:26:25.005513 2804799 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0821 11:26:25.005538 2804799 command_runner.go:130] > # image_volumes = "mkdir"
	I0821 11:26:25.005561 2804799 command_runner.go:130] > # Temporary directory to use for storing big files
	I0821 11:26:25.005598 2804799 command_runner.go:130] > # big_files_temporary_dir = ""
	I0821 11:26:25.005637 2804799 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0821 11:26:25.005656 2804799 command_runner.go:130] > # CNI plugins.
	I0821 11:26:25.005677 2804799 command_runner.go:130] > [crio.network]
	I0821 11:26:25.005714 2804799 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0821 11:26:25.005739 2804799 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0821 11:26:25.005760 2804799 command_runner.go:130] > # cni_default_network = ""
	I0821 11:26:25.005784 2804799 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0821 11:26:25.005818 2804799 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0821 11:26:25.005842 2804799 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0821 11:26:25.005862 2804799 command_runner.go:130] > # plugin_dirs = [
	I0821 11:26:25.005911 2804799 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0821 11:26:25.005939 2804799 command_runner.go:130] > # ]
	I0821 11:26:25.005965 2804799 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0821 11:26:25.005988 2804799 command_runner.go:130] > [crio.metrics]
	I0821 11:26:25.006050 2804799 command_runner.go:130] > # Globally enable or disable metrics support.
	I0821 11:26:25.006073 2804799 command_runner.go:130] > # enable_metrics = false
	I0821 11:26:25.006094 2804799 command_runner.go:130] > # Specify enabled metrics collectors.
	I0821 11:26:25.006126 2804799 command_runner.go:130] > # Per default all metrics are enabled.
	I0821 11:26:25.006156 2804799 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0821 11:26:25.006185 2804799 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0821 11:26:25.006207 2804799 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0821 11:26:25.006238 2804799 command_runner.go:130] > # metrics_collectors = [
	I0821 11:26:25.006265 2804799 command_runner.go:130] > # 	"operations",
	I0821 11:26:25.006289 2804799 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0821 11:26:25.006313 2804799 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0821 11:26:25.006345 2804799 command_runner.go:130] > # 	"operations_errors",
	I0821 11:26:25.006373 2804799 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0821 11:26:25.006396 2804799 command_runner.go:130] > # 	"image_pulls_by_name",
	I0821 11:26:25.006419 2804799 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0821 11:26:25.006453 2804799 command_runner.go:130] > # 	"image_pulls_failures",
	I0821 11:26:25.006477 2804799 command_runner.go:130] > # 	"image_pulls_successes",
	I0821 11:26:25.006498 2804799 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0821 11:26:25.006522 2804799 command_runner.go:130] > # 	"image_layer_reuse",
	I0821 11:26:25.006556 2804799 command_runner.go:130] > # 	"containers_oom_total",
	I0821 11:26:25.006579 2804799 command_runner.go:130] > # 	"containers_oom",
	I0821 11:26:25.006601 2804799 command_runner.go:130] > # 	"processes_defunct",
	I0821 11:26:25.006621 2804799 command_runner.go:130] > # 	"operations_total",
	I0821 11:26:25.006641 2804799 command_runner.go:130] > # 	"operations_latency_seconds",
	I0821 11:26:25.006675 2804799 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0821 11:26:25.006694 2804799 command_runner.go:130] > # 	"operations_errors_total",
	I0821 11:26:25.006714 2804799 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0821 11:26:25.006735 2804799 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0821 11:26:25.006771 2804799 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0821 11:26:25.006791 2804799 command_runner.go:130] > # 	"image_pulls_success_total",
	I0821 11:26:25.006813 2804799 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0821 11:26:25.006860 2804799 command_runner.go:130] > # 	"containers_oom_count_total",
	I0821 11:26:25.006881 2804799 command_runner.go:130] > # ]
	I0821 11:26:25.006901 2804799 command_runner.go:130] > # The port on which the metrics server will listen.
	I0821 11:26:25.006922 2804799 command_runner.go:130] > # metrics_port = 9090
	I0821 11:26:25.006944 2804799 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0821 11:26:25.006971 2804799 command_runner.go:130] > # metrics_socket = ""
	I0821 11:26:25.006996 2804799 command_runner.go:130] > # The certificate for the secure metrics server.
	I0821 11:26:25.007020 2804799 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0821 11:26:25.007044 2804799 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0821 11:26:25.007076 2804799 command_runner.go:130] > # certificate on any modification event.
	I0821 11:26:25.007104 2804799 command_runner.go:130] > # metrics_cert = ""
	I0821 11:26:25.007127 2804799 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0821 11:26:25.007147 2804799 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0821 11:26:25.007181 2804799 command_runner.go:130] > # metrics_key = ""
	I0821 11:26:25.007205 2804799 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0821 11:26:25.007224 2804799 command_runner.go:130] > [crio.tracing]
	I0821 11:26:25.007248 2804799 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0821 11:26:25.007280 2804799 command_runner.go:130] > # enable_tracing = false
	I0821 11:26:25.007304 2804799 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0821 11:26:25.007324 2804799 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0821 11:26:25.007347 2804799 command_runner.go:130] > # Number of samples to collect per million spans.
	I0821 11:26:25.007379 2804799 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0821 11:26:25.007402 2804799 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0821 11:26:25.007420 2804799 command_runner.go:130] > [crio.stats]
	I0821 11:26:25.007445 2804799 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0821 11:26:25.007477 2804799 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0821 11:26:25.007504 2804799 command_runner.go:130] > # stats_collection_period = 0
	I0821 11:26:25.007580 2804799 command_runner.go:130] ! time="2023-08-21 11:26:24.980222407Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I0821 11:26:25.007613 2804799 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0821 11:26:25.007714 2804799 cni.go:84] Creating CNI manager for ""
	I0821 11:26:25.007738 2804799 cni.go:136] 2 nodes found, recommending kindnet
	I0821 11:26:25.007759 2804799 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0821 11:26:25.007792 2804799 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.27.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-994910 NodeName:multinode-994910-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0821 11:26:25.007971 2804799 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-994910-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
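	The KubeletConfiguration document above deliberately disables disk-pressure eviction by setting every evictionHard threshold to the no-op value "0%". A minimal sketch (not minikube code; plain string handling, so no YAML library is assumed) that checks a rendered document for exactly that property:

```python
# Sanity-check sketch: confirm a rendered KubeletConfiguration disables
# disk-pressure eviction, i.e. every nodefs./imagefs. threshold under
# evictionHard is the no-op value "0%".

KUBELET_DOC = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
"""

def eviction_disabled(doc: str) -> bool:
    """True if all nodefs./imagefs. thresholds in the document are "0%"."""
    vals = [line.split(":", 1)[1].strip()
            for line in doc.splitlines()
            if line.lstrip().startswith(("nodefs.", "imagefs."))]
    return bool(vals) and all(v == '"0%"' for v in vals)
```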
	I0821 11:26:25.008048 2804799 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-994910-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.4 ClusterName:multinode-994910 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0821 11:26:25.008153 2804799 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.4
	I0821 11:26:25.018467 2804799 command_runner.go:130] > kubeadm
	I0821 11:26:25.018487 2804799 command_runner.go:130] > kubectl
	I0821 11:26:25.018493 2804799 command_runner.go:130] > kubelet
	I0821 11:26:25.019943 2804799 binaries.go:44] Found k8s binaries, skipping transfer
	I0821 11:26:25.020026 2804799 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0821 11:26:25.031372 2804799 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0821 11:26:25.053839 2804799 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0821 11:26:25.076609 2804799 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0821 11:26:25.081712 2804799 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
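	The bash one-liner above rewrites /etc/hosts by dropping any existing tab-separated entry for control-plane.minikube.internal and appending the pinned IP. The same transformation as an illustrative sketch (`pin_host` is our name for it, not minikube's):

```python
def pin_host(hosts: str, ip: str,
             name: str = "control-plane.minikube.internal") -> str:
    """Drop any line ending in '<TAB>name', then append 'ip<TAB>name',
    mirroring the grep -v / echo pipeline shown in the log."""
    kept = [line for line in hosts.splitlines()
            if not line.endswith("\t" + name)]
    kept.append(f"{ip}\t{name}")
    return "\n".join(kept) + "\n"
```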
	I0821 11:26:25.096778 2804799 host.go:66] Checking if "multinode-994910" exists ...
	I0821 11:26:25.097312 2804799 start.go:301] JoinCluster: &{Name:multinode-994910 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:multinode-994910 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain
:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker
MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 11:26:25.097442 2804799 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0821 11:26:25.097539 2804799 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-994910
	I0821 11:26:25.097089 2804799 config.go:182] Loaded profile config "multinode-994910": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0821 11:26:25.116259 2804799 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36263 SSHKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/machines/multinode-994910/id_rsa Username:docker}
	I0821 11:26:25.277487 2804799 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token rdy77u.en5seuzu38mnp2y8 --discovery-token-ca-cert-hash sha256:53df1391c07b454a6b96f5fce415fe23bfbfcda331215b828a9e1234aa2104c1 
	I0821 11:26:25.281121 2804799 start.go:322] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0821 11:26:25.281162 2804799 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token rdy77u.en5seuzu38mnp2y8 --discovery-token-ca-cert-hash sha256:53df1391c07b454a6b96f5fce415fe23bfbfcda331215b828a9e1234aa2104c1 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-994910-m02"
	I0821 11:26:25.322800 2804799 command_runner.go:130] > [preflight] Running pre-flight checks
	I0821 11:26:25.365891 2804799 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0821 11:26:25.365917 2804799 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1041-aws
	I0821 11:26:25.365927 2804799 command_runner.go:130] > OS: Linux
	I0821 11:26:25.365934 2804799 command_runner.go:130] > CGROUPS_CPU: enabled
	I0821 11:26:25.365942 2804799 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0821 11:26:25.365948 2804799 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0821 11:26:25.365959 2804799 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0821 11:26:25.365966 2804799 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0821 11:26:25.365972 2804799 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0821 11:26:25.365981 2804799 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0821 11:26:25.365987 2804799 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0821 11:26:25.365998 2804799 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0821 11:26:25.484504 2804799 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0821 11:26:25.484528 2804799 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0821 11:26:25.514977 2804799 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0821 11:26:25.515246 2804799 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0821 11:26:25.515423 2804799 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0821 11:26:25.620158 2804799 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0821 11:26:28.142597 2804799 command_runner.go:130] > This node has joined the cluster:
	I0821 11:26:28.142625 2804799 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0821 11:26:28.142633 2804799 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0821 11:26:28.142642 2804799 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0821 11:26:28.145855 2804799 command_runner.go:130] ! W0821 11:26:25.322277    1026 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0821 11:26:28.145910 2804799 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1041-aws\n", err: exit status 1
	I0821 11:26:28.145924 2804799 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0821 11:26:28.145939 2804799 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token rdy77u.en5seuzu38mnp2y8 --discovery-token-ca-cert-hash sha256:53df1391c07b454a6b96f5fce415fe23bfbfcda331215b828a9e1234aa2104c1 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-994910-m02": (2.864762963s)
	I0821 11:26:28.145956 2804799 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0821 11:26:28.370578 2804799 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I0821 11:26:28.370611 2804799 start.go:303] JoinCluster complete in 3.273297825s
	I0821 11:26:28.370621 2804799 cni.go:84] Creating CNI manager for ""
	I0821 11:26:28.370627 2804799 cni.go:136] 2 nodes found, recommending kindnet
	I0821 11:26:28.370681 2804799 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0821 11:26:28.375464 2804799 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0821 11:26:28.375486 2804799 command_runner.go:130] >   Size: 3841245   	Blocks: 7504       IO Block: 4096   regular file
	I0821 11:26:28.375494 2804799 command_runner.go:130] > Device: 36h/54d	Inode: 5713632     Links: 1
	I0821 11:26:28.375501 2804799 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0821 11:26:28.375508 2804799 command_runner.go:130] > Access: 2023-05-09 19:54:42.000000000 +0000
	I0821 11:26:28.375514 2804799 command_runner.go:130] > Modify: 2023-05-09 19:54:42.000000000 +0000
	I0821 11:26:28.375521 2804799 command_runner.go:130] > Change: 2023-08-21 11:02:39.230246643 +0000
	I0821 11:26:28.375527 2804799 command_runner.go:130] >  Birth: 2023-08-21 11:02:39.186247522 +0000
	I0821 11:26:28.375921 2804799 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.27.4/kubectl ...
	I0821 11:26:28.375938 2804799 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0821 11:26:28.397862 2804799 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0821 11:26:28.733354 2804799 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0821 11:26:28.739284 2804799 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0821 11:26:28.742700 2804799 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0821 11:26:28.758308 2804799 command_runner.go:130] > daemonset.apps/kindnet configured
	I0821 11:26:28.764107 2804799 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17102-2734539/kubeconfig
	I0821 11:26:28.764428 2804799 kapi.go:59] client config for multinode-994910: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/multinode-994910/client.crt", KeyFile:"/home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/multinode-994910/client.key", CAFile:"/home/jenkins/minikube-integration/17102-2734539/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1721b00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0821 11:26:28.764771 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0821 11:26:28.764786 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:28.764796 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:28.764804 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:28.767384 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:28.767406 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:28.767415 2804799 round_trippers.go:580]     Audit-Id: 8b355218-0693-4ff2-99d2-83da91753064
	I0821 11:26:28.767423 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:28.767429 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:28.767436 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:28.767442 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:28.767453 2804799 round_trippers.go:580]     Content-Length: 291
	I0821 11:26:28.767459 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:28 GMT
	I0821 11:26:28.767485 2804799 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"0178f5ef-d7de-4a72-bc3c-366a7efa6d34","resourceVersion":"458","creationTimestamp":"2023-08-21T11:25:26Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0821 11:26:28.767580 2804799 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-994910" context rescaled to 1 replicas
	I0821 11:26:28.767608 2804799 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0821 11:26:28.772597 2804799 out.go:177] * Verifying Kubernetes components...
	I0821 11:26:28.775729 2804799 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0821 11:26:28.790250 2804799 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17102-2734539/kubeconfig
	I0821 11:26:28.790558 2804799 kapi.go:59] client config for multinode-994910: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/multinode-994910/client.crt", KeyFile:"/home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/multinode-994910/client.key", CAFile:"/home/jenkins/minikube-integration/17102-2734539/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1721b00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0821 11:26:28.790827 2804799 node_ready.go:35] waiting up to 6m0s for node "multinode-994910-m02" to be "Ready" ...
	I0821 11:26:28.790895 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910-m02
	I0821 11:26:28.790905 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:28.790915 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:28.790926 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:28.793471 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:28.793498 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:28.793507 2804799 round_trippers.go:580]     Audit-Id: c66231f6-33c6-4d99-8113-669256005fd4
	I0821 11:26:28.793514 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:28.793521 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:28.793527 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:28.793537 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:28.793544 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:28 GMT
	I0821 11:26:28.793717 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910-m02","uid":"da2af1e0-b1f9-483d-a4e8-4b98838f7731","resourceVersion":"493","creationTimestamp":"2023-08-21T11:26:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5183 chars]
	I0821 11:26:28.794135 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910-m02
	I0821 11:26:28.794149 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:28.794158 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:28.794165 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:28.796476 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:28.796559 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:28.796581 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:28 GMT
	I0821 11:26:28.796610 2804799 round_trippers.go:580]     Audit-Id: f2315a74-2af8-4244-9d9c-399ee31e9a27
	I0821 11:26:28.796630 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:28.796658 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:28.796672 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:28.796679 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:28.796787 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910-m02","uid":"da2af1e0-b1f9-483d-a4e8-4b98838f7731","resourceVersion":"493","creationTimestamp":"2023-08-21T11:26:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5183 chars]
	I0821 11:26:29.297894 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910-m02
	I0821 11:26:29.297916 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:29.297930 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:29.297938 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:29.300540 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:29.300609 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:29.300631 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:29.300651 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:29.300687 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:29.300715 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:29 GMT
	I0821 11:26:29.300728 2804799 round_trippers.go:580]     Audit-Id: ea91acac-cc61-4f45-bc40-0b2e64494df5
	I0821 11:26:29.300736 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:29.300860 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910-m02","uid":"da2af1e0-b1f9-483d-a4e8-4b98838f7731","resourceVersion":"493","creationTimestamp":"2023-08-21T11:26:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5183 chars]
	I0821 11:26:29.798287 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910-m02
	I0821 11:26:29.798352 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:29.798368 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:29.798376 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:29.800868 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:29.800890 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:29.800899 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:29.800906 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:29.800913 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:29.800920 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:29 GMT
	I0821 11:26:29.800926 2804799 round_trippers.go:580]     Audit-Id: 743870e6-7462-4a42-adc7-e2b2bf79161b
	I0821 11:26:29.800933 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:29.801120 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910-m02","uid":"da2af1e0-b1f9-483d-a4e8-4b98838f7731","resourceVersion":"506","creationTimestamp":"2023-08-21T11:26:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I0821 11:26:30.297410 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910-m02
	I0821 11:26:30.297435 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:30.297446 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:30.297453 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:30.300027 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:30.300097 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:30.300121 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:30.300141 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:30 GMT
	I0821 11:26:30.300176 2804799 round_trippers.go:580]     Audit-Id: 61e6a6fe-9506-4527-b7d7-39fd103eef76
	I0821 11:26:30.300202 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:30.300223 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:30.300254 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:30.300406 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910-m02","uid":"da2af1e0-b1f9-483d-a4e8-4b98838f7731","resourceVersion":"506","creationTimestamp":"2023-08-21T11:26:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I0821 11:26:30.797901 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910-m02
	I0821 11:26:30.797924 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:30.797935 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:30.797943 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:30.800408 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:30.800437 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:30.800447 2804799 round_trippers.go:580]     Audit-Id: 45d320d5-71ce-4c6d-b155-157b38fc9bc7
	I0821 11:26:30.800454 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:30.800461 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:30.800468 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:30.800479 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:30.800492 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:30 GMT
	I0821 11:26:30.800586 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910-m02","uid":"da2af1e0-b1f9-483d-a4e8-4b98838f7731","resourceVersion":"506","creationTimestamp":"2023-08-21T11:26:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I0821 11:26:30.800943 2804799 node_ready.go:58] node "multinode-994910-m02" has status "Ready":"False"
	I0821 11:26:31.298113 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910-m02
	I0821 11:26:31.298143 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:31.298154 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:31.298161 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:31.300492 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:31.300513 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:31.300522 2804799 round_trippers.go:580]     Audit-Id: b77341ee-dc3f-40dc-9e09-54758a92fbb8
	I0821 11:26:31.300529 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:31.300535 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:31.300542 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:31.300552 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:31.300559 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:31 GMT
	I0821 11:26:31.300793 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910-m02","uid":"da2af1e0-b1f9-483d-a4e8-4b98838f7731","resourceVersion":"506","creationTimestamp":"2023-08-21T11:26:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I0821 11:26:31.797998 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910-m02
	I0821 11:26:31.798020 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:31.798029 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:31.798037 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:31.800659 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:31.800679 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:31.800688 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:31.800695 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:31 GMT
	I0821 11:26:31.800701 2804799 round_trippers.go:580]     Audit-Id: fe7a0598-3abb-4a0a-83d8-f89ccac980cd
	I0821 11:26:31.800708 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:31.800714 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:31.800721 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:31.800858 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910-m02","uid":"da2af1e0-b1f9-483d-a4e8-4b98838f7731","resourceVersion":"506","creationTimestamp":"2023-08-21T11:26:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I0821 11:26:32.298210 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910-m02
	I0821 11:26:32.298235 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:32.298245 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:32.298253 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:32.300930 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:32.300952 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:32.300961 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:32.300969 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:32.300976 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:32.300983 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:32.300990 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:32 GMT
	I0821 11:26:32.300996 2804799 round_trippers.go:580]     Audit-Id: 5e676c7a-6d34-468d-a0c4-e8995b0e03e6
	I0821 11:26:32.301135 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910-m02","uid":"da2af1e0-b1f9-483d-a4e8-4b98838f7731","resourceVersion":"506","creationTimestamp":"2023-08-21T11:26:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I0821 11:26:32.797251 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910-m02
	I0821 11:26:32.797277 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:32.797287 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:32.797295 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:32.799692 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:32.799714 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:32.799723 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:32.799730 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:32.799736 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:32.799747 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:32 GMT
	I0821 11:26:32.799755 2804799 round_trippers.go:580]     Audit-Id: 3545346d-5bdd-49f9-b7c0-a82241126162
	I0821 11:26:32.799762 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:32.800063 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910-m02","uid":"da2af1e0-b1f9-483d-a4e8-4b98838f7731","resourceVersion":"506","creationTimestamp":"2023-08-21T11:26:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I0821 11:26:33.297378 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910-m02
	I0821 11:26:33.297401 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:33.297411 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:33.297418 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:33.300105 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:33.300139 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:33.300148 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:33.300161 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:33 GMT
	I0821 11:26:33.300168 2804799 round_trippers.go:580]     Audit-Id: a4538bd5-0cfe-4d22-8ba9-8b9cab6ef671
	I0821 11:26:33.300177 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:33.300187 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:33.300206 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:33.300333 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910-m02","uid":"da2af1e0-b1f9-483d-a4e8-4b98838f7731","resourceVersion":"506","creationTimestamp":"2023-08-21T11:26:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I0821 11:26:33.300804 2804799 node_ready.go:58] node "multinode-994910-m02" has status "Ready":"False"
	I0821 11:26:33.798237 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910-m02
	I0821 11:26:33.798262 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:33.798272 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:33.798280 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:33.800754 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:33.800778 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:33.800787 2804799 round_trippers.go:580]     Audit-Id: 5eefeca1-46db-41ad-82b9-b22ba7f0a2e0
	I0821 11:26:33.800794 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:33.800800 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:33.800807 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:33.800814 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:33.800821 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:33 GMT
	I0821 11:26:33.800918 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910-m02","uid":"da2af1e0-b1f9-483d-a4e8-4b98838f7731","resourceVersion":"506","creationTimestamp":"2023-08-21T11:26:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I0821 11:26:34.298020 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910-m02
	I0821 11:26:34.298046 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:34.298056 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:34.298065 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:34.300642 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:34.300667 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:34.300676 2804799 round_trippers.go:580]     Audit-Id: 1bc811e3-4a05-4513-83a7-cb68416ad4ce
	I0821 11:26:34.300684 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:34.300690 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:34.300697 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:34.300704 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:34.300716 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:34 GMT
	I0821 11:26:34.300833 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910-m02","uid":"da2af1e0-b1f9-483d-a4e8-4b98838f7731","resourceVersion":"506","creationTimestamp":"2023-08-21T11:26:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I0821 11:26:34.797284 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910-m02
	I0821 11:26:34.797307 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:34.797317 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:34.797325 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:34.799761 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:34.799782 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:34.799791 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:34.799798 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:34.799804 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:34.799811 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:34.799818 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:34 GMT
	I0821 11:26:34.799825 2804799 round_trippers.go:580]     Audit-Id: 6cc734a6-6caf-42d4-9c4b-ac9041e3197b
	I0821 11:26:34.799929 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910-m02","uid":"da2af1e0-b1f9-483d-a4e8-4b98838f7731","resourceVersion":"506","creationTimestamp":"2023-08-21T11:26:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I0821 11:26:35.298033 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910-m02
	I0821 11:26:35.298055 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:35.298065 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:35.298073 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:35.300622 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:35.300643 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:35.300652 2804799 round_trippers.go:580]     Audit-Id: cdf95d22-e429-4cc0-a19c-00419731e65e
	I0821 11:26:35.300659 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:35.300666 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:35.300672 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:35.300679 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:35.300686 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:35 GMT
	I0821 11:26:35.300842 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910-m02","uid":"da2af1e0-b1f9-483d-a4e8-4b98838f7731","resourceVersion":"506","creationTimestamp":"2023-08-21T11:26:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I0821 11:26:35.301198 2804799 node_ready.go:58] node "multinode-994910-m02" has status "Ready":"False"
	I0821 11:26:35.797814 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910-m02
	I0821 11:26:35.797837 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:35.797847 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:35.797856 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:35.800365 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:35.800388 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:35.800397 2804799 round_trippers.go:580]     Audit-Id: 612bb04e-5b64-4496-8920-7d2d20447db0
	I0821 11:26:35.800403 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:35.800410 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:35.800416 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:35.800423 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:35.800430 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:35 GMT
	I0821 11:26:35.800518 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910-m02","uid":"da2af1e0-b1f9-483d-a4e8-4b98838f7731","resourceVersion":"506","creationTimestamp":"2023-08-21T11:26:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I0821 11:26:36.297462 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910-m02
	I0821 11:26:36.297487 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:36.297501 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:36.297509 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:36.299968 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:36.299989 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:36.299998 2804799 round_trippers.go:580]     Audit-Id: 15bffc91-11d6-40ef-974f-c4896616c04a
	I0821 11:26:36.300006 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:36.300013 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:36.300019 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:36.300025 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:36.300032 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:36 GMT
	I0821 11:26:36.300140 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910-m02","uid":"da2af1e0-b1f9-483d-a4e8-4b98838f7731","resourceVersion":"506","creationTimestamp":"2023-08-21T11:26:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I0821 11:26:36.797303 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910-m02
	I0821 11:26:36.797323 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:36.797334 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:36.797341 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:36.800260 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:36.800288 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:36.800298 2804799 round_trippers.go:580]     Audit-Id: a452d885-038b-4067-ae63-3e9698cae681
	I0821 11:26:36.800305 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:36.800312 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:36.800319 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:36.800329 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:36.800341 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:36 GMT
	I0821 11:26:36.800436 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910-m02","uid":"da2af1e0-b1f9-483d-a4e8-4b98838f7731","resourceVersion":"506","creationTimestamp":"2023-08-21T11:26:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I0821 11:26:37.297970 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910-m02
	I0821 11:26:37.297995 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:37.298006 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:37.298014 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:37.300388 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:37.300407 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:37.300416 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:37.300422 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:37 GMT
	I0821 11:26:37.300429 2804799 round_trippers.go:580]     Audit-Id: 85e79cb0-045d-4be1-aa41-661f5da384c4
	I0821 11:26:37.300435 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:37.300442 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:37.300449 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:37.300557 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910-m02","uid":"da2af1e0-b1f9-483d-a4e8-4b98838f7731","resourceVersion":"506","creationTimestamp":"2023-08-21T11:26:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I0821 11:26:37.797350 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910-m02
	I0821 11:26:37.797371 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:37.797381 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:37.797389 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:37.799898 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:37.799922 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:37.799932 2804799 round_trippers.go:580]     Audit-Id: a43f3a2f-0c9b-413a-8dc2-9391b96f16d3
	I0821 11:26:37.799939 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:37.799945 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:37.799952 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:37.799958 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:37.799970 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:37 GMT
	I0821 11:26:37.800082 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910-m02","uid":"da2af1e0-b1f9-483d-a4e8-4b98838f7731","resourceVersion":"506","creationTimestamp":"2023-08-21T11:26:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I0821 11:26:37.800477 2804799 node_ready.go:58] node "multinode-994910-m02" has status "Ready":"False"
	I0821 11:26:38.298228 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910-m02
	I0821 11:26:38.298253 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:38.298262 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:38.298270 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:38.300738 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:38.300760 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:38.300769 2804799 round_trippers.go:580]     Audit-Id: 92f7ebb9-c422-4728-bbe6-a10892bc060c
	I0821 11:26:38.300776 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:38.300783 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:38.300790 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:38.300799 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:38.300806 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:38 GMT
	I0821 11:26:38.301137 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910-m02","uid":"da2af1e0-b1f9-483d-a4e8-4b98838f7731","resourceVersion":"506","creationTimestamp":"2023-08-21T11:26:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I0821 11:26:38.798230 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910-m02
	I0821 11:26:38.798255 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:38.798266 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:38.798273 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:38.800673 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:38.800696 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:38.800705 2804799 round_trippers.go:580]     Audit-Id: 83d13669-d5de-40bf-b133-ca5c584d1be4
	I0821 11:26:38.800712 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:38.800718 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:38.800725 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:38.800732 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:38.800742 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:38 GMT
	I0821 11:26:38.800984 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910-m02","uid":"da2af1e0-b1f9-483d-a4e8-4b98838f7731","resourceVersion":"520","creationTimestamp":"2023-08-21T11:26:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0821 11:26:39.297928 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910-m02
	I0821 11:26:39.297951 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:39.297962 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:39.297969 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:39.300628 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:39.300651 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:39.300659 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:39 GMT
	I0821 11:26:39.300667 2804799 round_trippers.go:580]     Audit-Id: 63e3e568-01b0-4647-acd5-944b5d260b40
	I0821 11:26:39.300673 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:39.300683 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:39.300690 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:39.300700 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:39.300907 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910-m02","uid":"da2af1e0-b1f9-483d-a4e8-4b98838f7731","resourceVersion":"520","creationTimestamp":"2023-08-21T11:26:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0821 11:26:39.797512 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910-m02
	I0821 11:26:39.797537 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:39.797548 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:39.797556 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:39.800112 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:39.800135 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:39.800143 2804799 round_trippers.go:580]     Audit-Id: fc62686a-80bf-4441-ae23-529965a891e4
	I0821 11:26:39.800150 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:39.800157 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:39.800163 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:39.800171 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:39.800177 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:39 GMT
	I0821 11:26:39.800268 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910-m02","uid":"da2af1e0-b1f9-483d-a4e8-4b98838f7731","resourceVersion":"520","creationTimestamp":"2023-08-21T11:26:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0821 11:26:39.800635 2804799 node_ready.go:58] node "multinode-994910-m02" has status "Ready":"False"
	I0821 11:26:40.298268 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910-m02
	I0821 11:26:40.298292 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:40.298302 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:40.298309 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:40.300935 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:40.300963 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:40.300972 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:40.300978 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:40.300985 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:40.300992 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:40.301000 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:40 GMT
	I0821 11:26:40.301007 2804799 round_trippers.go:580]     Audit-Id: 6637ec34-335f-4384-8c5d-0914f7634b39
	I0821 11:26:40.301114 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910-m02","uid":"da2af1e0-b1f9-483d-a4e8-4b98838f7731","resourceVersion":"520","creationTimestamp":"2023-08-21T11:26:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0821 11:26:40.798229 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910-m02
	I0821 11:26:40.798250 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:40.798259 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:40.798266 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:40.800848 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:40.800869 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:40.800878 2804799 round_trippers.go:580]     Audit-Id: c99595fd-de0a-4d66-ac27-4c2cb1197157
	I0821 11:26:40.800884 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:40.800891 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:40.800897 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:40.800904 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:40.800910 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:40 GMT
	I0821 11:26:40.801009 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910-m02","uid":"da2af1e0-b1f9-483d-a4e8-4b98838f7731","resourceVersion":"520","creationTimestamp":"2023-08-21T11:26:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0821 11:26:41.298134 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910-m02
	I0821 11:26:41.298162 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:41.298172 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:41.298180 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:41.300794 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:41.300822 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:41.300833 2804799 round_trippers.go:580]     Audit-Id: 3f524795-7dc6-4f74-9266-5db44629a78f
	I0821 11:26:41.300841 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:41.300848 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:41.300854 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:41.300860 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:41.300868 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:41 GMT
	I0821 11:26:41.300979 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910-m02","uid":"da2af1e0-b1f9-483d-a4e8-4b98838f7731","resourceVersion":"520","creationTimestamp":"2023-08-21T11:26:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0821 11:26:41.798211 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910-m02
	I0821 11:26:41.798235 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:41.798245 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:41.798253 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:41.800788 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:41.800813 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:41.800822 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:41.800829 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:41.800836 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:41 GMT
	I0821 11:26:41.800843 2804799 round_trippers.go:580]     Audit-Id: 705260a7-7622-4809-b5b6-700a35139c37
	I0821 11:26:41.800850 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:41.800859 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:41.800975 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910-m02","uid":"da2af1e0-b1f9-483d-a4e8-4b98838f7731","resourceVersion":"520","creationTimestamp":"2023-08-21T11:26:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0821 11:26:41.801346 2804799 node_ready.go:58] node "multinode-994910-m02" has status "Ready":"False"
	I0821 11:26:42.298118 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910-m02
	I0821 11:26:42.298145 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:42.298156 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:42.298163 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:42.300867 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:42.300895 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:42.300904 2804799 round_trippers.go:580]     Audit-Id: 79055462-8816-45f5-bf04-d29d61b5dc0c
	I0821 11:26:42.300911 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:42.300918 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:42.300924 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:42.300931 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:42.300938 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:42 GMT
	I0821 11:26:42.301070 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910-m02","uid":"da2af1e0-b1f9-483d-a4e8-4b98838f7731","resourceVersion":"520","creationTimestamp":"2023-08-21T11:26:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0821 11:26:42.798205 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910-m02
	I0821 11:26:42.798229 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:42.798240 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:42.798248 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:42.800817 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:42.800845 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:42.800855 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:42.800862 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:42.800870 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:42 GMT
	I0821 11:26:42.800877 2804799 round_trippers.go:580]     Audit-Id: e1a87c83-8aac-47b3-94dc-56c732f04544
	I0821 11:26:42.800887 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:42.800898 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:42.800997 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910-m02","uid":"da2af1e0-b1f9-483d-a4e8-4b98838f7731","resourceVersion":"520","creationTimestamp":"2023-08-21T11:26:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0821 11:26:43.297458 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910-m02
	I0821 11:26:43.297480 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:43.297491 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:43.297499 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:43.299933 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:43.299967 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:43.299977 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:43.299984 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:43 GMT
	I0821 11:26:43.299991 2804799 round_trippers.go:580]     Audit-Id: dbf1caea-8ed5-4ab5-8447-d2ee8e8300ea
	I0821 11:26:43.299997 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:43.300004 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:43.300011 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:43.300122 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910-m02","uid":"da2af1e0-b1f9-483d-a4e8-4b98838f7731","resourceVersion":"520","creationTimestamp":"2023-08-21T11:26:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0821 11:26:43.798272 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910-m02
	I0821 11:26:43.798298 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:43.798309 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:43.798321 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:43.800878 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:43.800904 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:43.800914 2804799 round_trippers.go:580]     Audit-Id: 5623c494-039c-49b5-bfb0-c3d6052461e7
	I0821 11:26:43.800921 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:43.800928 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:43.800938 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:43.800946 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:43.800960 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:43 GMT
	I0821 11:26:43.801154 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910-m02","uid":"da2af1e0-b1f9-483d-a4e8-4b98838f7731","resourceVersion":"520","creationTimestamp":"2023-08-21T11:26:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0821 11:26:43.801542 2804799 node_ready.go:58] node "multinode-994910-m02" has status "Ready":"False"
	I0821 11:26:44.297352 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910-m02
	I0821 11:26:44.297374 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:44.297384 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:44.297391 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:44.299887 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:44.299914 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:44.299924 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:44.299931 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:44.299938 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:44 GMT
	I0821 11:26:44.299944 2804799 round_trippers.go:580]     Audit-Id: 40af6994-c5c9-4e6d-83eb-c1cf0be15011
	I0821 11:26:44.299950 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:44.299957 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:44.300053 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910-m02","uid":"da2af1e0-b1f9-483d-a4e8-4b98838f7731","resourceVersion":"520","creationTimestamp":"2023-08-21T11:26:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0821 11:26:44.797349 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910-m02
	I0821 11:26:44.797376 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:44.797386 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:44.797393 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:44.800285 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:44.800312 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:44.800321 2804799 round_trippers.go:580]     Audit-Id: 6145d430-fd73-4602-be08-d75091f1d8fb
	I0821 11:26:44.800329 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:44.800336 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:44.800344 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:44.800351 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:44.800357 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:44 GMT
	I0821 11:26:44.800449 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910-m02","uid":"da2af1e0-b1f9-483d-a4e8-4b98838f7731","resourceVersion":"520","creationTimestamp":"2023-08-21T11:26:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0821 11:26:45.297452 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910-m02
	I0821 11:26:45.297491 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:45.297502 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:45.297519 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:45.300565 2804799 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0821 11:26:45.300593 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:45.300602 2804799 round_trippers.go:580]     Audit-Id: 3b038432-e89f-4c89-9408-b870e81280ed
	I0821 11:26:45.300610 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:45.300617 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:45.300624 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:45.300630 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:45.300636 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:45 GMT
	I0821 11:26:45.300739 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910-m02","uid":"da2af1e0-b1f9-483d-a4e8-4b98838f7731","resourceVersion":"520","creationTimestamp":"2023-08-21T11:26:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0821 11:26:45.797634 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910-m02
	I0821 11:26:45.797660 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:45.797671 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:45.797678 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:45.800172 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:45.800195 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:45.800205 2804799 round_trippers.go:580]     Audit-Id: 302afd96-e235-4e5b-aa0b-62541fd5a7af
	I0821 11:26:45.800212 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:45.800219 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:45.800225 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:45.800232 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:45.800239 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:45 GMT
	I0821 11:26:45.800325 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910-m02","uid":"da2af1e0-b1f9-483d-a4e8-4b98838f7731","resourceVersion":"520","creationTimestamp":"2023-08-21T11:26:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0821 11:26:46.297603 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910-m02
	I0821 11:26:46.297625 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:46.297635 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:46.297643 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:46.300471 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:46.300505 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:46.300515 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:46 GMT
	I0821 11:26:46.300522 2804799 round_trippers.go:580]     Audit-Id: 668cdf31-97b8-4d8d-89f8-d129f78ab913
	I0821 11:26:46.300529 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:46.300535 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:46.300542 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:46.300548 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:46.300824 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910-m02","uid":"da2af1e0-b1f9-483d-a4e8-4b98838f7731","resourceVersion":"520","creationTimestamp":"2023-08-21T11:26:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0821 11:26:46.301208 2804799 node_ready.go:58] node "multinode-994910-m02" has status "Ready":"False"
	I0821 11:26:46.798223 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910-m02
	I0821 11:26:46.798249 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:46.798265 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:46.798281 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:46.800861 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:46.800884 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:46.800893 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:46 GMT
	I0821 11:26:46.800901 2804799 round_trippers.go:580]     Audit-Id: 3cefd562-590a-4d0a-9cce-7f134b2c1571
	I0821 11:26:46.800908 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:46.800914 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:46.800921 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:46.800928 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:46.801016 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910-m02","uid":"da2af1e0-b1f9-483d-a4e8-4b98838f7731","resourceVersion":"520","creationTimestamp":"2023-08-21T11:26:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0821 11:26:47.297621 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910-m02
	I0821 11:26:47.297662 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:47.297673 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:47.297680 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:47.300207 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:47.300239 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:47.300248 2804799 round_trippers.go:580]     Audit-Id: f03e97f4-400c-43b2-aaf9-e8c023d46e9c
	I0821 11:26:47.300255 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:47.300262 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:47.300268 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:47.300275 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:47.300282 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:47 GMT
	I0821 11:26:47.300495 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910-m02","uid":"da2af1e0-b1f9-483d-a4e8-4b98838f7731","resourceVersion":"520","creationTimestamp":"2023-08-21T11:26:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0821 11:26:47.798045 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910-m02
	I0821 11:26:47.798071 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:47.798082 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:47.798095 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:47.800786 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:47.800810 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:47.800820 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:47.800828 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:47.800834 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:47 GMT
	I0821 11:26:47.800842 2804799 round_trippers.go:580]     Audit-Id: 464a8ce5-ba9f-4363-95c9-3998863d8b9e
	I0821 11:26:47.800848 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:47.800855 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:47.800974 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910-m02","uid":"da2af1e0-b1f9-483d-a4e8-4b98838f7731","resourceVersion":"520","creationTimestamp":"2023-08-21T11:26:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0821 11:26:48.297645 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910-m02
	I0821 11:26:48.297670 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:48.297681 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:48.297689 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:48.300125 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:48.300145 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:48.300153 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:48.300160 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:48 GMT
	I0821 11:26:48.300167 2804799 round_trippers.go:580]     Audit-Id: 67f4089d-ba8d-48dd-a1cd-4556f34f6e77
	I0821 11:26:48.300173 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:48.300180 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:48.300187 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:48.300281 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910-m02","uid":"da2af1e0-b1f9-483d-a4e8-4b98838f7731","resourceVersion":"520","creationTimestamp":"2023-08-21T11:26:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0821 11:26:48.798283 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910-m02
	I0821 11:26:48.798309 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:48.798319 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:48.798326 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:48.800846 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:48.800867 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:48.800876 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:48 GMT
	I0821 11:26:48.800883 2804799 round_trippers.go:580]     Audit-Id: 80382ab0-b947-4d9c-8d35-f75444d6ec32
	I0821 11:26:48.800889 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:48.800896 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:48.800902 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:48.800909 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:48.801165 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910-m02","uid":"da2af1e0-b1f9-483d-a4e8-4b98838f7731","resourceVersion":"520","creationTimestamp":"2023-08-21T11:26:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0821 11:26:48.801551 2804799 node_ready.go:58] node "multinode-994910-m02" has status "Ready":"False"
	I0821 11:26:49.297863 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910-m02
	I0821 11:26:49.297906 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:49.297917 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:49.297924 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:49.300466 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:49.300493 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:49.300502 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:49.300508 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:49.300515 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:49.300522 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:49.300531 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:49 GMT
	I0821 11:26:49.300538 2804799 round_trippers.go:580]     Audit-Id: 24d317d4-70d4-49b2-a76f-805c254ecc4d
	I0821 11:26:49.300827 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910-m02","uid":"da2af1e0-b1f9-483d-a4e8-4b98838f7731","resourceVersion":"520","creationTimestamp":"2023-08-21T11:26:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0821 11:26:49.797435 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910-m02
	I0821 11:26:49.797457 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:49.797467 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:49.797474 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:49.800393 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:49.800414 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:49.800422 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:49.800429 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:49.800436 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:49.800442 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:49 GMT
	I0821 11:26:49.800449 2804799 round_trippers.go:580]     Audit-Id: c9a24b5b-ff70-467d-84cc-da3c2e73beba
	I0821 11:26:49.800456 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:49.800578 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910-m02","uid":"da2af1e0-b1f9-483d-a4e8-4b98838f7731","resourceVersion":"520","creationTimestamp":"2023-08-21T11:26:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0821 11:26:50.297700 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910-m02
	I0821 11:26:50.297724 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:50.297736 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:50.297743 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:50.300363 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:50.300390 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:50.300399 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:50.300406 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:50.300413 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:50.300419 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:50 GMT
	I0821 11:26:50.300427 2804799 round_trippers.go:580]     Audit-Id: f2c5feb1-54c4-4783-ad5a-99bda3ad4f9c
	I0821 11:26:50.300434 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:50.300532 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910-m02","uid":"da2af1e0-b1f9-483d-a4e8-4b98838f7731","resourceVersion":"520","creationTimestamp":"2023-08-21T11:26:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0821 11:26:50.797313 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910-m02
	I0821 11:26:50.797337 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:50.797347 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:50.797355 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:50.799972 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:50.799999 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:50.800008 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:50.800015 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:50.800021 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:50.800028 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:50.800035 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:50 GMT
	I0821 11:26:50.800045 2804799 round_trippers.go:580]     Audit-Id: ed4f4e5c-3a6c-442e-96c2-902d8b6b8b92
	I0821 11:26:50.800131 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910-m02","uid":"da2af1e0-b1f9-483d-a4e8-4b98838f7731","resourceVersion":"520","creationTimestamp":"2023-08-21T11:26:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0821 11:26:51.298024 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910-m02
	I0821 11:26:51.298047 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:51.298057 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:51.298064 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:51.300640 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:51.300666 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:51.300676 2804799 round_trippers.go:580]     Audit-Id: 1af85e66-159f-488d-81b2-009d7a70621f
	I0821 11:26:51.300683 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:51.300690 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:51.300697 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:51.300706 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:51.300722 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:51 GMT
	I0821 11:26:51.300939 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910-m02","uid":"da2af1e0-b1f9-483d-a4e8-4b98838f7731","resourceVersion":"520","creationTimestamp":"2023-08-21T11:26:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0821 11:26:51.301327 2804799 node_ready.go:58] node "multinode-994910-m02" has status "Ready":"False"
	I0821 11:26:51.797764 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910-m02
	I0821 11:26:51.797789 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:51.797799 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:51.797806 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:51.800191 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:51.800211 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:51.800219 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:51.800227 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:51.800234 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:51.800240 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:51.800247 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:51 GMT
	I0821 11:26:51.800253 2804799 round_trippers.go:580]     Audit-Id: 871f4953-07ce-4216-86ce-b660f40c0a97
	I0821 11:26:51.800365 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910-m02","uid":"da2af1e0-b1f9-483d-a4e8-4b98838f7731","resourceVersion":"520","creationTimestamp":"2023-08-21T11:26:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0821 11:26:52.297350 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910-m02
	I0821 11:26:52.297378 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:52.297389 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:52.297397 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:52.299928 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:52.299955 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:52.299965 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:52.299972 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:52.299980 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:52.299990 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:52 GMT
	I0821 11:26:52.299999 2804799 round_trippers.go:580]     Audit-Id: d4da0728-a63f-43b1-9ee9-0121babe81a4
	I0821 11:26:52.300006 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:52.300417 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910-m02","uid":"da2af1e0-b1f9-483d-a4e8-4b98838f7731","resourceVersion":"520","creationTimestamp":"2023-08-21T11:26:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0821 11:26:52.797356 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910-m02
	I0821 11:26:52.797383 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:52.797394 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:52.797401 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:52.799845 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:52.799865 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:52.799873 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:52.799880 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:52.799887 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:52 GMT
	I0821 11:26:52.799894 2804799 round_trippers.go:580]     Audit-Id: e043863e-c731-4e31-96be-46cdef7af51a
	I0821 11:26:52.799902 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:52.799908 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:52.800010 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910-m02","uid":"da2af1e0-b1f9-483d-a4e8-4b98838f7731","resourceVersion":"520","creationTimestamp":"2023-08-21T11:26:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0821 11:26:53.298193 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910-m02
	I0821 11:26:53.298217 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:53.298227 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:53.298235 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:53.300652 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:53.300677 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:53.300686 2804799 round_trippers.go:580]     Audit-Id: b5f711c0-4cac-4841-91d4-3cd264594869
	I0821 11:26:53.300693 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:53.300700 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:53.300716 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:53.300723 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:53.300734 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:53 GMT
	I0821 11:26:53.301014 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910-m02","uid":"da2af1e0-b1f9-483d-a4e8-4b98838f7731","resourceVersion":"520","creationTimestamp":"2023-08-21T11:26:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0821 11:26:53.301392 2804799 node_ready.go:58] node "multinode-994910-m02" has status "Ready":"False"
	I0821 11:26:53.797372 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910-m02
	I0821 11:26:53.797395 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:53.797405 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:53.797412 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:53.799923 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:53.799950 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:53.799958 2804799 round_trippers.go:580]     Audit-Id: c6be56aa-79a4-4aea-90b3-cb2dfe9c7d25
	I0821 11:26:53.799973 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:53.799981 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:53.799989 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:53.799999 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:53.800006 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:53 GMT
	I0821 11:26:53.800121 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910-m02","uid":"da2af1e0-b1f9-483d-a4e8-4b98838f7731","resourceVersion":"520","creationTimestamp":"2023-08-21T11:26:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0821 11:26:54.298279 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910-m02
	I0821 11:26:54.298306 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:54.298318 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:54.298325 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:54.300806 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:54.300828 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:54.300836 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:54.300843 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:54.300849 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:54.300856 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:54.300862 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:54 GMT
	I0821 11:26:54.300869 2804799 round_trippers.go:580]     Audit-Id: befaf9df-9c61-42f8-9ace-2886a003deb9
	I0821 11:26:54.300971 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910-m02","uid":"da2af1e0-b1f9-483d-a4e8-4b98838f7731","resourceVersion":"520","creationTimestamp":"2023-08-21T11:26:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0821 11:26:54.798271 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910-m02
	I0821 11:26:54.798297 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:54.798308 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:54.798320 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:54.800734 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:54.800759 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:54.800768 2804799 round_trippers.go:580]     Audit-Id: 67b39aaa-d8ab-4e58-9d40-8c157956d5b5
	I0821 11:26:54.800775 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:54.800781 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:54.800787 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:54.800794 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:54.800801 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:54 GMT
	I0821 11:26:54.800889 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910-m02","uid":"da2af1e0-b1f9-483d-a4e8-4b98838f7731","resourceVersion":"520","creationTimestamp":"2023-08-21T11:26:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0821 11:26:55.298010 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910-m02
	I0821 11:26:55.298036 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:55.298047 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:55.298055 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:55.300964 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:55.300988 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:55.300998 2804799 round_trippers.go:580]     Audit-Id: 33e3f8c6-ca83-412a-9f9b-06de81513dbd
	I0821 11:26:55.301005 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:55.301012 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:55.301018 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:55.301024 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:55.301032 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:55 GMT
	I0821 11:26:55.301135 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910-m02","uid":"da2af1e0-b1f9-483d-a4e8-4b98838f7731","resourceVersion":"520","creationTimestamp":"2023-08-21T11:26:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0821 11:26:55.301505 2804799 node_ready.go:58] node "multinode-994910-m02" has status "Ready":"False"
	I0821 11:26:55.798255 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910-m02
	I0821 11:26:55.798277 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:55.798287 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:55.798299 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:55.800722 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:55.800743 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:55.800751 2804799 round_trippers.go:580]     Audit-Id: be561b97-3434-4328-9227-ee12ed3d8df5
	I0821 11:26:55.800758 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:55.800764 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:55.800771 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:55.800777 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:55.800784 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:55 GMT
	I0821 11:26:55.800872 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910-m02","uid":"da2af1e0-b1f9-483d-a4e8-4b98838f7731","resourceVersion":"520","creationTimestamp":"2023-08-21T11:26:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0821 11:26:56.297418 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910-m02
	I0821 11:26:56.297440 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:56.297450 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:56.297458 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:56.299879 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:56.299905 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:56.299914 2804799 round_trippers.go:580]     Audit-Id: 1469f637-876b-4c75-99bf-e50de0a43b7e
	I0821 11:26:56.299921 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:56.299927 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:56.299934 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:56.299941 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:56.299950 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:56 GMT
	I0821 11:26:56.300102 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910-m02","uid":"da2af1e0-b1f9-483d-a4e8-4b98838f7731","resourceVersion":"520","creationTimestamp":"2023-08-21T11:26:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0821 11:26:56.797439 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910-m02
	I0821 11:26:56.797476 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:56.797493 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:56.797508 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:56.800050 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:56.800075 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:56.800084 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:56.800092 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:56 GMT
	I0821 11:26:56.800098 2804799 round_trippers.go:580]     Audit-Id: 70153db0-b7b4-4ece-9720-26cdc8bd108a
	I0821 11:26:56.800105 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:56.800111 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:56.800119 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:56.800207 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910-m02","uid":"da2af1e0-b1f9-483d-a4e8-4b98838f7731","resourceVersion":"520","creationTimestamp":"2023-08-21T11:26:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0821 11:26:57.297259 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910-m02
	I0821 11:26:57.297284 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:57.297295 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:57.297302 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:57.299792 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:57.299813 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:57.299821 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:57.299828 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:57.299834 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:57.299841 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:57 GMT
	I0821 11:26:57.299848 2804799 round_trippers.go:580]     Audit-Id: e778eff0-d292-4cd7-95f6-883fc6d74a58
	I0821 11:26:57.299856 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:57.299956 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910-m02","uid":"da2af1e0-b1f9-483d-a4e8-4b98838f7731","resourceVersion":"520","creationTimestamp":"2023-08-21T11:26:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0821 11:26:57.798101 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910-m02
	I0821 11:26:57.798128 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:57.798138 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:57.798146 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:57.800557 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:57.800582 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:57.800591 2804799 round_trippers.go:580]     Audit-Id: bce57a7a-9b2a-49f5-9107-0f99043e4dd5
	I0821 11:26:57.800598 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:57.800605 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:57.800612 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:57.800618 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:57.800626 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:57 GMT
	I0821 11:26:57.800713 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910-m02","uid":"da2af1e0-b1f9-483d-a4e8-4b98838f7731","resourceVersion":"520","creationTimestamp":"2023-08-21T11:26:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0821 11:26:57.801083 2804799 node_ready.go:58] node "multinode-994910-m02" has status "Ready":"False"
	I0821 11:26:58.297545 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910-m02
	I0821 11:26:58.297592 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:58.297602 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:58.297610 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:58.300561 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:58.300592 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:58.300601 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:58.300608 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:58.300614 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:58.300621 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:58 GMT
	I0821 11:26:58.300628 2804799 round_trippers.go:580]     Audit-Id: 3523aa8d-284b-49ce-9097-355ae2d1e9a4
	I0821 11:26:58.300636 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:58.300771 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910-m02","uid":"da2af1e0-b1f9-483d-a4e8-4b98838f7731","resourceVersion":"520","creationTimestamp":"2023-08-21T11:26:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0821 11:26:58.797298 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910-m02
	I0821 11:26:58.797323 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:58.797333 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:58.797342 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:58.799962 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:58.799991 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:58.800000 2804799 round_trippers.go:580]     Audit-Id: 402f6149-fe3d-42e2-abf0-ac1d9dd0cf8a
	I0821 11:26:58.800007 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:58.800014 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:58.800021 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:58.800028 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:58.800035 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:58 GMT
	I0821 11:26:58.800142 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910-m02","uid":"da2af1e0-b1f9-483d-a4e8-4b98838f7731","resourceVersion":"520","creationTimestamp":"2023-08-21T11:26:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0821 11:26:59.298313 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910-m02
	I0821 11:26:59.298336 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:59.298346 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:59.298353 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:59.300802 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:59.300824 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:59.300832 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:59.300842 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:59 GMT
	I0821 11:26:59.300849 2804799 round_trippers.go:580]     Audit-Id: a0aea0f2-9907-45e7-8c68-00546a55515d
	I0821 11:26:59.300856 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:59.300862 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:59.300869 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:59.301151 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910-m02","uid":"da2af1e0-b1f9-483d-a4e8-4b98838f7731","resourceVersion":"520","creationTimestamp":"2023-08-21T11:26:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5561 chars]
	I0821 11:26:59.798291 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910-m02
	I0821 11:26:59.798314 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:59.798324 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:59.798331 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:59.800832 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:59.800857 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:59.800866 2804799 round_trippers.go:580]     Audit-Id: 403f5900-8535-4614-adc4-c10dac4a233d
	I0821 11:26:59.800873 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:59.800880 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:59.800887 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:59.800898 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:59.800910 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:59 GMT
	I0821 11:26:59.801042 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910-m02","uid":"da2af1e0-b1f9-483d-a4e8-4b98838f7731","resourceVersion":"541","creationTimestamp":"2023-08-21T11:26:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5378 chars]
	I0821 11:26:59.801430 2804799 node_ready.go:49] node "multinode-994910-m02" has status "Ready":"True"
	I0821 11:26:59.801448 2804799 node_ready.go:38] duration metric: took 31.010605902s waiting for node "multinode-994910-m02" to be "Ready" ...
	I0821 11:26:59.801457 2804799 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0821 11:26:59.801531 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0821 11:26:59.801542 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:59.801550 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:59.801557 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:59.805479 2804799 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0821 11:26:59.805509 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:59.805518 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:59.805524 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:59.805531 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:59.805538 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:59.805545 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:59 GMT
	I0821 11:26:59.805551 2804799 round_trippers.go:580]     Audit-Id: bd383512-45b0-405c-808d-59fe7ee31eb3
	I0821 11:26:59.806026 2804799 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"541"},"items":[{"metadata":{"name":"coredns-5d78c9869d-zj5f8","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"b6aeac2c-fd47-4855-8a60-675aa03078a6","resourceVersion":"454","creationTimestamp":"2023-08-21T11:25:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"6f5458c4-0287-4acb-a4c3-19fd45c7091a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:25:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6f5458c4-0287-4acb-a4c3-19fd45c7091a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 68974 chars]
	I0821 11:26:59.808968 2804799 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-zj5f8" in "kube-system" namespace to be "Ready" ...
	I0821 11:26:59.809103 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-zj5f8
	I0821 11:26:59.809119 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:59.809129 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:59.809136 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:59.811634 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:59.811666 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:59.811675 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:59.811682 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:59.811689 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:59.811700 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:59 GMT
	I0821 11:26:59.811707 2804799 round_trippers.go:580]     Audit-Id: 5f6a6f8c-f9dd-41e1-a1a1-d12a52bac153
	I0821 11:26:59.811718 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:59.812015 2804799 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-zj5f8","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"b6aeac2c-fd47-4855-8a60-675aa03078a6","resourceVersion":"454","creationTimestamp":"2023-08-21T11:25:40Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"6f5458c4-0287-4acb-a4c3-19fd45c7091a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:25:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6f5458c4-0287-4acb-a4c3-19fd45c7091a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I0821 11:26:59.812574 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:26:59.812592 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:59.812601 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:59.812609 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:59.815053 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:59.815073 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:59.815082 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:59.815090 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:59 GMT
	I0821 11:26:59.815097 2804799 round_trippers.go:580]     Audit-Id: cfa7320e-2996-4313-a39c-cb741b3de07c
	I0821 11:26:59.815107 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:59.815117 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:59.815123 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:59.815377 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","resourceVersion":"435","creationTimestamp":"2023-08-21T11:25:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-994910","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T11_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-21T11:25:23Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0821 11:26:59.815775 2804799 pod_ready.go:92] pod "coredns-5d78c9869d-zj5f8" in "kube-system" namespace has status "Ready":"True"
	I0821 11:26:59.815793 2804799 pod_ready.go:81] duration metric: took 6.799947ms waiting for pod "coredns-5d78c9869d-zj5f8" in "kube-system" namespace to be "Ready" ...
	I0821 11:26:59.815804 2804799 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-994910" in "kube-system" namespace to be "Ready" ...
	I0821 11:26:59.815862 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-994910
	I0821 11:26:59.815872 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:59.815881 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:59.815888 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:59.818399 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:59.818424 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:59.818433 2804799 round_trippers.go:580]     Audit-Id: 7ce2ab30-7971-419b-9c71-70bef8278750
	I0821 11:26:59.818440 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:59.818448 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:59.818455 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:59.818461 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:59.818468 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:59 GMT
	I0821 11:26:59.818572 2804799 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-994910","namespace":"kube-system","uid":"24d87a69-0a05-42d6-ba48-1d33fb7412be","resourceVersion":"425","creationTimestamp":"2023-08-21T11:25:26Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"9b1925099c14da60e336ef1734e7725e","kubernetes.io/config.mirror":"9b1925099c14da60e336ef1734e7725e","kubernetes.io/config.seen":"2023-08-21T11:25:26.585338579Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:25:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I0821 11:26:59.819021 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:26:59.819037 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:59.819046 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:59.819053 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:59.821439 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:59.821489 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:59.821524 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:59 GMT
	I0821 11:26:59.821545 2804799 round_trippers.go:580]     Audit-Id: 8e584f3a-cd8f-4db5-9713-45d238415179
	I0821 11:26:59.821593 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:59.821618 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:59.821631 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:59.821638 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:59.821796 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","resourceVersion":"435","creationTimestamp":"2023-08-21T11:25:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-994910","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T11_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-21T11:25:23Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0821 11:26:59.822244 2804799 pod_ready.go:92] pod "etcd-multinode-994910" in "kube-system" namespace has status "Ready":"True"
	I0821 11:26:59.822263 2804799 pod_ready.go:81] duration metric: took 6.453074ms waiting for pod "etcd-multinode-994910" in "kube-system" namespace to be "Ready" ...
	I0821 11:26:59.822297 2804799 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-994910" in "kube-system" namespace to be "Ready" ...
	I0821 11:26:59.822371 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-994910
	I0821 11:26:59.822380 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:59.822389 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:59.822397 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:59.824828 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:59.824848 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:59.824856 2804799 round_trippers.go:580]     Audit-Id: 3f1073c6-4a0b-444e-b14d-133e1951a1dd
	I0821 11:26:59.824863 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:59.824869 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:59.824876 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:59.824887 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:59.824900 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:59 GMT
	I0821 11:26:59.825208 2804799 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-994910","namespace":"kube-system","uid":"41fedc8e-465b-4561-977c-624f45660c46","resourceVersion":"424","creationTimestamp":"2023-08-21T11:25:26Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"cde3bdd814a40f419d40c9c13bc7666b","kubernetes.io/config.mirror":"cde3bdd814a40f419d40c9c13bc7666b","kubernetes.io/config.seen":"2023-08-21T11:25:26.585339867Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:25:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I0821 11:26:59.825746 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:26:59.825762 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:59.825772 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:59.825779 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:59.828119 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:59.828141 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:59.828150 2804799 round_trippers.go:580]     Audit-Id: 24b8bdf7-5e02-4b56-b545-d6a2a559a05b
	I0821 11:26:59.828156 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:59.828163 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:59.828170 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:59.828176 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:59.828186 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:59 GMT
	I0821 11:26:59.828331 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","resourceVersion":"435","creationTimestamp":"2023-08-21T11:25:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-994910","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T11_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-21T11:25:23Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0821 11:26:59.828714 2804799 pod_ready.go:92] pod "kube-apiserver-multinode-994910" in "kube-system" namespace has status "Ready":"True"
	I0821 11:26:59.828731 2804799 pod_ready.go:81] duration metric: took 6.422806ms waiting for pod "kube-apiserver-multinode-994910" in "kube-system" namespace to be "Ready" ...
	I0821 11:26:59.828742 2804799 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-994910" in "kube-system" namespace to be "Ready" ...
	I0821 11:26:59.828808 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-994910
	I0821 11:26:59.828820 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:59.828837 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:59.828847 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:59.831404 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:59.831435 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:59.831457 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:59.831471 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:59.831479 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:59 GMT
	I0821 11:26:59.831489 2804799 round_trippers.go:580]     Audit-Id: 869ff697-9432-4b80-8fc8-6fa906c5b834
	I0821 11:26:59.831496 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:59.831519 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:59.831693 2804799 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-994910","namespace":"kube-system","uid":"884f2285-c54f-4972-bdab-2e0f7a2bf63d","resourceVersion":"422","creationTimestamp":"2023-08-21T11:25:26Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4f6eec765e4666706050a72ce2877339","kubernetes.io/config.mirror":"4f6eec765e4666706050a72ce2877339","kubernetes.io/config.seen":"2023-08-21T11:25:26.585331506Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:25:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I0821 11:26:59.832237 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:26:59.832255 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:59.832264 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:59.832272 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:26:59.834812 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:26:59.834862 2804799 round_trippers.go:577] Response Headers:
	I0821 11:26:59.834900 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:26:59.834926 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:26:59.834960 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:26:59.834968 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:26:59.834975 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:26:59 GMT
	I0821 11:26:59.834985 2804799 round_trippers.go:580]     Audit-Id: 81291354-5dcb-495e-a165-ad21a55a024f
	I0821 11:26:59.835093 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","resourceVersion":"435","creationTimestamp":"2023-08-21T11:25:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-994910","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T11_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-21T11:25:23Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0821 11:26:59.835542 2804799 pod_ready.go:92] pod "kube-controller-manager-multinode-994910" in "kube-system" namespace has status "Ready":"True"
	I0821 11:26:59.835562 2804799 pod_ready.go:81] duration metric: took 6.8072ms waiting for pod "kube-controller-manager-multinode-994910" in "kube-system" namespace to be "Ready" ...
	I0821 11:26:59.835574 2804799 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-758dj" in "kube-system" namespace to be "Ready" ...
	I0821 11:26:59.998947 2804799 request.go:629] Waited for 163.305425ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-758dj
	I0821 11:26:59.999030 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-758dj
	I0821 11:26:59.999041 2804799 round_trippers.go:469] Request Headers:
	I0821 11:26:59.999056 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:26:59.999069 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:27:00.004126 2804799 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0821 11:27:00.004206 2804799 round_trippers.go:577] Response Headers:
	I0821 11:27:00.004223 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:27:00.004231 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:27:00.004238 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:27:00.004245 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:27:00 GMT
	I0821 11:27:00.004252 2804799 round_trippers.go:580]     Audit-Id: b1473ac9-8e85-48f4-a38b-ba544fc5eeb8
	I0821 11:27:00.004259 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:27:00.004417 2804799 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-758dj","generateName":"kube-proxy-","namespace":"kube-system","uid":"f2232edb-23d3-4789-86a0-9e3cd68aeea3","resourceVersion":"416","creationTimestamp":"2023-08-21T11:25:40Z","labels":{"controller-revision-hash":"86cc8bcbf7","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"85e90316-63be-42e0-89ab-cb4dd52d7cf1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:25:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"85e90316-63be-42e0-89ab-cb4dd52d7cf1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I0821 11:27:00.198789 2804799 request.go:629] Waited for 193.839646ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:27:00.198873 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:27:00.198892 2804799 round_trippers.go:469] Request Headers:
	I0821 11:27:00.198902 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:27:00.198917 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:27:00.201869 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:27:00.201917 2804799 round_trippers.go:577] Response Headers:
	I0821 11:27:00.201926 2804799 round_trippers.go:580]     Audit-Id: 53f352fd-78c5-4f02-a80f-7085fb4fb4c4
	I0821 11:27:00.201932 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:27:00.201940 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:27:00.201946 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:27:00.201953 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:27:00.201960 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:27:00 GMT
	I0821 11:27:00.202385 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","resourceVersion":"435","creationTimestamp":"2023-08-21T11:25:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-994910","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T11_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-21T11:25:23Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0821 11:27:00.202843 2804799 pod_ready.go:92] pod "kube-proxy-758dj" in "kube-system" namespace has status "Ready":"True"
	I0821 11:27:00.202859 2804799 pod_ready.go:81] duration metric: took 367.27571ms waiting for pod "kube-proxy-758dj" in "kube-system" namespace to be "Ready" ...
	I0821 11:27:00.202871 2804799 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cmkk5" in "kube-system" namespace to be "Ready" ...
	I0821 11:27:00.399299 2804799 request.go:629] Waited for 196.357996ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cmkk5
	I0821 11:27:00.399382 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cmkk5
	I0821 11:27:00.399410 2804799 round_trippers.go:469] Request Headers:
	I0821 11:27:00.399424 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:27:00.399443 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:27:00.402788 2804799 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0821 11:27:00.402841 2804799 round_trippers.go:577] Response Headers:
	I0821 11:27:00.402863 2804799 round_trippers.go:580]     Audit-Id: 21f81613-a284-43a0-b12c-5b9dd6cc5b36
	I0821 11:27:00.402885 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:27:00.402921 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:27:00.402945 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:27:00.402967 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:27:00.402991 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:27:00 GMT
	I0821 11:27:00.403318 2804799 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-cmkk5","generateName":"kube-proxy-","namespace":"kube-system","uid":"4657542d-dd24-4cc8-8f1c-01f056ffac7a","resourceVersion":"509","creationTimestamp":"2023-08-21T11:26:28Z","labels":{"controller-revision-hash":"86cc8bcbf7","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"85e90316-63be-42e0-89ab-cb4dd52d7cf1","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"85e90316-63be-42e0-89ab-cb4dd52d7cf1\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0821 11:27:00.599142 2804799 request.go:629] Waited for 195.342221ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-994910-m02
	I0821 11:27:00.599203 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910-m02
	I0821 11:27:00.599213 2804799 round_trippers.go:469] Request Headers:
	I0821 11:27:00.599225 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:27:00.599235 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:27:00.601759 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:27:00.601784 2804799 round_trippers.go:577] Response Headers:
	I0821 11:27:00.601793 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:27:00 GMT
	I0821 11:27:00.601801 2804799 round_trippers.go:580]     Audit-Id: 48746393-52a9-46ef-82b0-1244c8c5d0f4
	I0821 11:27:00.601825 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:27:00.601841 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:27:00.601849 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:27:00.601859 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:27:00.602180 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910-m02","uid":"da2af1e0-b1f9-483d-a4e8-4b98838f7731","resourceVersion":"541","creationTimestamp":"2023-08-21T11:26:28Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5378 chars]
	I0821 11:27:00.602568 2804799 pod_ready.go:92] pod "kube-proxy-cmkk5" in "kube-system" namespace has status "Ready":"True"
	I0821 11:27:00.602583 2804799 pod_ready.go:81] duration metric: took 399.706073ms waiting for pod "kube-proxy-cmkk5" in "kube-system" namespace to be "Ready" ...
	I0821 11:27:00.602595 2804799 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-994910" in "kube-system" namespace to be "Ready" ...
	I0821 11:27:00.799070 2804799 request.go:629] Waited for 196.388805ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-994910
	I0821 11:27:00.799149 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-994910
	I0821 11:27:00.799161 2804799 round_trippers.go:469] Request Headers:
	I0821 11:27:00.799171 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:27:00.799179 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:27:00.802059 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:27:00.802128 2804799 round_trippers.go:577] Response Headers:
	I0821 11:27:00.802161 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:27:00.802185 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:27:00.802207 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:27:00.802240 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:27:00 GMT
	I0821 11:27:00.802262 2804799 round_trippers.go:580]     Audit-Id: 2dfabe8a-dab3-4a1e-a142-b76f1f23496e
	I0821 11:27:00.802284 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:27:00.802430 2804799 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-994910","namespace":"kube-system","uid":"6e91ba04-2902-4d40-ab3a-1c492a5faf72","resourceVersion":"423","creationTimestamp":"2023-08-21T11:25:25Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b581863c19058988eada7e36b412ebab","kubernetes.io/config.mirror":"b581863c19058988eada7e36b412ebab","kubernetes.io/config.seen":"2023-08-21T11:25:18.581136241Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-08-21T11:25:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I0821 11:27:00.999157 2804799 request.go:629] Waited for 196.307986ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:27:00.999236 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-994910
	I0821 11:27:00.999245 2804799 round_trippers.go:469] Request Headers:
	I0821 11:27:00.999262 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:27:00.999274 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:27:01.003384 2804799 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0821 11:27:01.003413 2804799 round_trippers.go:577] Response Headers:
	I0821 11:27:01.003422 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:27:01.003429 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:27:01.003436 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:27:01.003442 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:27:01 GMT
	I0821 11:27:01.003449 2804799 round_trippers.go:580]     Audit-Id: 185a59c5-c6d2-428f-a9a9-28576f0c1e95
	I0821 11:27:01.003455 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:27:01.003589 2804799 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","resourceVersion":"435","creationTimestamp":"2023-08-21T11:25:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-994910","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T11_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-08-21T11:25:23Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0821 11:27:01.004011 2804799 pod_ready.go:92] pod "kube-scheduler-multinode-994910" in "kube-system" namespace has status "Ready":"True"
	I0821 11:27:01.004032 2804799 pod_ready.go:81] duration metric: took 401.427392ms waiting for pod "kube-scheduler-multinode-994910" in "kube-system" namespace to be "Ready" ...
	I0821 11:27:01.004087 2804799 pod_ready.go:38] duration metric: took 1.202569899s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0821 11:27:01.004107 2804799 system_svc.go:44] waiting for kubelet service to be running ....
	I0821 11:27:01.004172 2804799 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0821 11:27:01.017957 2804799 system_svc.go:56] duration metric: took 13.842695ms WaitForService to wait for kubelet.
	I0821 11:27:01.017981 2804799 kubeadm.go:581] duration metric: took 32.250345155s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0821 11:27:01.018000 2804799 node_conditions.go:102] verifying NodePressure condition ...
	I0821 11:27:01.198307 2804799 request.go:629] Waited for 180.237479ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0821 11:27:01.198373 2804799 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0821 11:27:01.198380 2804799 round_trippers.go:469] Request Headers:
	I0821 11:27:01.198389 2804799 round_trippers.go:473]     Accept: application/json, */*
	I0821 11:27:01.198401 2804799 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0821 11:27:01.201276 2804799 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0821 11:27:01.201306 2804799 round_trippers.go:577] Response Headers:
	I0821 11:27:01.201315 2804799 round_trippers.go:580]     Audit-Id: 7d47259e-c834-499a-860d-cb8987695bef
	I0821 11:27:01.201322 2804799 round_trippers.go:580]     Cache-Control: no-cache, private
	I0821 11:27:01.201329 2804799 round_trippers.go:580]     Content-Type: application/json
	I0821 11:27:01.201335 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: dc9ec490-4911-4bf1-9fcf-85ec46c00269
	I0821 11:27:01.201342 2804799 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 8f5aad39-ce41-46fb-9efb-d03dadf7fc0a
	I0821 11:27:01.201348 2804799 round_trippers.go:580]     Date: Mon, 21 Aug 2023 11:27:01 GMT
	I0821 11:27:01.201501 2804799 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"542"},"items":[{"metadata":{"name":"multinode-994910","uid":"dff2fff1-e407-428d-b0ae-d5b209fa6d18","resourceVersion":"435","creationTimestamp":"2023-08-21T11:25:23Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-994910","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43","minikube.k8s.io/name":"multinode-994910","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_08_21T11_25_27_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 12452 chars]
	I0821 11:27:01.202166 2804799 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0821 11:27:01.202191 2804799 node_conditions.go:123] node cpu capacity is 2
	I0821 11:27:01.202202 2804799 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0821 11:27:01.202207 2804799 node_conditions.go:123] node cpu capacity is 2
	I0821 11:27:01.202217 2804799 node_conditions.go:105] duration metric: took 184.21221ms to run NodePressure ...
	I0821 11:27:01.202228 2804799 start.go:228] waiting for startup goroutines ...
	I0821 11:27:01.202253 2804799 start.go:242] writing updated cluster config ...
	I0821 11:27:01.202566 2804799 ssh_runner.go:195] Run: rm -f paused
	I0821 11:27:01.262777 2804799 start.go:600] kubectl: 1.28.0, cluster: 1.27.4 (minor skew: 1)
	I0821 11:27:01.266633 2804799 out.go:177] * Done! kubectl is now configured to use "multinode-994910" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* Aug 21 11:26:11 multinode-994910 crio[893]: time="2023-08-21 11:26:11.960944446Z" level=info msg="Created container 3f1be3a6f40c65748abf44a2d4a536ca67dd3c26c144694771353c53a0e3831c: kube-system/storage-provisioner/storage-provisioner" id=331921ad-ac71-4b5a-92fe-0afe5c635146 name=/runtime.v1.RuntimeService/CreateContainer
	Aug 21 11:26:11 multinode-994910 crio[893]: time="2023-08-21 11:26:11.961275877Z" level=info msg="Starting container: be8c1a15867113f23e95ccff42f69eaa9a39323ee376fd6ad3fb4e6bd73d66d7" id=15116e67-1303-415e-8cff-f0e734e43683 name=/runtime.v1.RuntimeService/StartContainer
	Aug 21 11:26:11 multinode-994910 crio[893]: time="2023-08-21 11:26:11.961838270Z" level=info msg="Starting container: 3f1be3a6f40c65748abf44a2d4a536ca67dd3c26c144694771353c53a0e3831c" id=74f81b7b-e1bb-4e2b-accf-1f7a71a55624 name=/runtime.v1.RuntimeService/StartContainer
	Aug 21 11:26:11 multinode-994910 crio[893]: time="2023-08-21 11:26:11.981195769Z" level=info msg="Started container" PID=1938 containerID=3f1be3a6f40c65748abf44a2d4a536ca67dd3c26c144694771353c53a0e3831c description=kube-system/storage-provisioner/storage-provisioner id=74f81b7b-e1bb-4e2b-accf-1f7a71a55624 name=/runtime.v1.RuntimeService/StartContainer sandboxID=cd594cfb887cdf4490c147e81700bf907f9608b155dd51f780f2b16d8f9d35aa
	Aug 21 11:26:11 multinode-994910 crio[893]: time="2023-08-21 11:26:11.985239381Z" level=info msg="Started container" PID=1944 containerID=be8c1a15867113f23e95ccff42f69eaa9a39323ee376fd6ad3fb4e6bd73d66d7 description=kube-system/coredns-5d78c9869d-zj5f8/coredns id=15116e67-1303-415e-8cff-f0e734e43683 name=/runtime.v1.RuntimeService/StartContainer sandboxID=cf614286d7f18daae5de51625ceb2a33dfd64d8709926963110463c6e335866e
	Aug 21 11:27:03 multinode-994910 crio[893]: time="2023-08-21 11:27:03.117967433Z" level=info msg="Running pod sandbox: default/busybox-67b7f59bb-zhpmt/POD" id=dd5fed6d-a3d3-4efe-bd10-a59afb9c33b1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Aug 21 11:27:03 multinode-994910 crio[893]: time="2023-08-21 11:27:03.118035264Z" level=warning msg="Allowed annotations are specified for workload []"
	Aug 21 11:27:03 multinode-994910 crio[893]: time="2023-08-21 11:27:03.139169014Z" level=info msg="Got pod network &{Name:busybox-67b7f59bb-zhpmt Namespace:default ID:bbe97577fca8328b608f6333730af5ee8d219372cc2067ef57228158495b4777 UID:a2ad5b2b-a29c-44ba-b4c1-cc4fd97cf238 NetNS:/var/run/netns/76b03297-dcd2-496c-8096-2633f9be8834 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Aug 21 11:27:03 multinode-994910 crio[893]: time="2023-08-21 11:27:03.139214995Z" level=info msg="Adding pod default_busybox-67b7f59bb-zhpmt to CNI network \"kindnet\" (type=ptp)"
	Aug 21 11:27:03 multinode-994910 crio[893]: time="2023-08-21 11:27:03.148479788Z" level=info msg="Got pod network &{Name:busybox-67b7f59bb-zhpmt Namespace:default ID:bbe97577fca8328b608f6333730af5ee8d219372cc2067ef57228158495b4777 UID:a2ad5b2b-a29c-44ba-b4c1-cc4fd97cf238 NetNS:/var/run/netns/76b03297-dcd2-496c-8096-2633f9be8834 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Aug 21 11:27:03 multinode-994910 crio[893]: time="2023-08-21 11:27:03.148641048Z" level=info msg="Checking pod default_busybox-67b7f59bb-zhpmt for CNI network kindnet (type=ptp)"
	Aug 21 11:27:03 multinode-994910 crio[893]: time="2023-08-21 11:27:03.167872682Z" level=info msg="Ran pod sandbox bbe97577fca8328b608f6333730af5ee8d219372cc2067ef57228158495b4777 with infra container: default/busybox-67b7f59bb-zhpmt/POD" id=dd5fed6d-a3d3-4efe-bd10-a59afb9c33b1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Aug 21 11:27:03 multinode-994910 crio[893]: time="2023-08-21 11:27:03.170672044Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=fd08066c-15db-49a9-a1a3-d30a997670f6 name=/runtime.v1.ImageService/ImageStatus
	Aug 21 11:27:03 multinode-994910 crio[893]: time="2023-08-21 11:27:03.170895457Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=fd08066c-15db-49a9-a1a3-d30a997670f6 name=/runtime.v1.ImageService/ImageStatus
	Aug 21 11:27:03 multinode-994910 crio[893]: time="2023-08-21 11:27:03.174207211Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=01bbedac-e401-4863-be95-2a8daf33077e name=/runtime.v1.ImageService/PullImage
	Aug 21 11:27:03 multinode-994910 crio[893]: time="2023-08-21 11:27:03.175411322Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Aug 21 11:27:03 multinode-994910 crio[893]: time="2023-08-21 11:27:03.864677692Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Aug 21 11:27:05 multinode-994910 crio[893]: time="2023-08-21 11:27:05.073696136Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3" id=01bbedac-e401-4863-be95-2a8daf33077e name=/runtime.v1.ImageService/PullImage
	Aug 21 11:27:05 multinode-994910 crio[893]: time="2023-08-21 11:27:05.077207770Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=fda06195-6623-4edf-a32a-f24e1b40ad2f name=/runtime.v1.ImageService/ImageStatus
	Aug 21 11:27:05 multinode-994910 crio[893]: time="2023-08-21 11:27:05.078218548Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1496796,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=fda06195-6623-4edf-a32a-f24e1b40ad2f name=/runtime.v1.ImageService/ImageStatus
	Aug 21 11:27:05 multinode-994910 crio[893]: time="2023-08-21 11:27:05.079181573Z" level=info msg="Creating container: default/busybox-67b7f59bb-zhpmt/busybox" id=c13d1658-2f59-469b-89fe-590d96949d36 name=/runtime.v1.RuntimeService/CreateContainer
	Aug 21 11:27:05 multinode-994910 crio[893]: time="2023-08-21 11:27:05.079281674Z" level=warning msg="Allowed annotations are specified for workload []"
	Aug 21 11:27:05 multinode-994910 crio[893]: time="2023-08-21 11:27:05.167601471Z" level=info msg="Created container c171e596417c9c0df011c1cf9790efe8eec67059c2f6298787b87d975cee2a4b: default/busybox-67b7f59bb-zhpmt/busybox" id=c13d1658-2f59-469b-89fe-590d96949d36 name=/runtime.v1.RuntimeService/CreateContainer
	Aug 21 11:27:05 multinode-994910 crio[893]: time="2023-08-21 11:27:05.168426431Z" level=info msg="Starting container: c171e596417c9c0df011c1cf9790efe8eec67059c2f6298787b87d975cee2a4b" id=3eb3d4af-de4d-498f-8dc9-9b36dad1995c name=/runtime.v1.RuntimeService/StartContainer
	Aug 21 11:27:05 multinode-994910 crio[893]: time="2023-08-21 11:27:05.180462655Z" level=info msg="Started container" PID=2086 containerID=c171e596417c9c0df011c1cf9790efe8eec67059c2f6298787b87d975cee2a4b description=default/busybox-67b7f59bb-zhpmt/busybox id=3eb3d4af-de4d-498f-8dc9-9b36dad1995c name=/runtime.v1.RuntimeService/StartContainer sandboxID=bbe97577fca8328b608f6333730af5ee8d219372cc2067ef57228158495b4777
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	c171e596417c9       gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3   5 seconds ago        Running             busybox                   0                   bbe97577fca83       busybox-67b7f59bb-zhpmt
	be8c1a1586711       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      58 seconds ago       Running             coredns                   0                   cf614286d7f18       coredns-5d78c9869d-zj5f8
	3f1be3a6f40c6       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      58 seconds ago       Running             storage-provisioner       0                   cd594cfb887cd       storage-provisioner
	d963205ea6c10       532e5a30e948f1c084333316b13e68fbeff8df667f3830b082005127a6d86317                                      About a minute ago   Running             kube-proxy                0                   d70836267a876       kube-proxy-758dj
	b5ffae545601a       b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79                                      About a minute ago   Running             kindnet-cni               0                   8eb5972d56cdd       kindnet-vmb94
	95e8b8fb237ac       6eb63895cb67fce76da3ed6eaaa865ff55e7c761c9e6a691a83855ff0987a085                                      About a minute ago   Running             kube-scheduler            0                   4f51585910dc3       kube-scheduler-multinode-994910
	03559917ca751       64aece92d6bde5b472d8185fcd2d5ab1add8814923a26561821f7cab5e819388                                      About a minute ago   Running             kube-apiserver            0                   91bffca1559ec       kube-apiserver-multinode-994910
	dc7690f986d61       24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737                                      About a minute ago   Running             etcd                      0                   7639bea312761       etcd-multinode-994910
	7fc6a940d6702       389f6f052cf83156f82a2bbbf6ea2c24292d246b58900d91f6a1707eacf510b2                                      About a minute ago   Running             kube-controller-manager   0                   f5c1ce6aca166       kube-controller-manager-multinode-994910
	
	* 
	* ==> coredns [be8c1a15867113f23e95ccff42f69eaa9a39323ee376fd6ad3fb4e6bd73d66d7] <==
	* [INFO] 10.244.1.2:34644 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000135874s
	[INFO] 10.244.0.3:37680 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000098066s
	[INFO] 10.244.0.3:53861 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001203176s
	[INFO] 10.244.0.3:48893 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000076125s
	[INFO] 10.244.0.3:44332 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000044143s
	[INFO] 10.244.0.3:56202 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00083954s
	[INFO] 10.244.0.3:56113 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000070423s
	[INFO] 10.244.0.3:57811 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000054965s
	[INFO] 10.244.0.3:39829 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000389s
	[INFO] 10.244.1.2:51115 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000404686s
	[INFO] 10.244.1.2:42701 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000085553s
	[INFO] 10.244.1.2:38389 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000085627s
	[INFO] 10.244.1.2:49337 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000062997s
	[INFO] 10.244.0.3:55730 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000105877s
	[INFO] 10.244.0.3:59216 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000075986s
	[INFO] 10.244.0.3:34937 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000072163s
	[INFO] 10.244.0.3:42516 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000059609s
	[INFO] 10.244.1.2:45627 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0000996s
	[INFO] 10.244.1.2:38473 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000166126s
	[INFO] 10.244.1.2:59052 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000127505s
	[INFO] 10.244.1.2:49497 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00021077s
	[INFO] 10.244.0.3:59948 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001044s
	[INFO] 10.244.0.3:47167 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000089106s
	[INFO] 10.244.0.3:36051 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000088384s
	[INFO] 10.244.0.3:58809 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000065237s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-994910
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=multinode-994910
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f7aa7ee8733269de9a8f53e8b965ffa82ed4a43
	                    minikube.k8s.io/name=multinode-994910
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_08_21T11_25_27_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 21 Aug 2023 11:25:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-994910
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 21 Aug 2023 11:27:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 21 Aug 2023 11:26:11 +0000   Mon, 21 Aug 2023 11:25:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 21 Aug 2023 11:26:11 +0000   Mon, 21 Aug 2023 11:25:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 21 Aug 2023 11:26:11 +0000   Mon, 21 Aug 2023 11:25:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 21 Aug 2023 11:26:11 +0000   Mon, 21 Aug 2023 11:26:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-994910
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022564Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022564Ki
	  pods:               110
	System Info:
	  Machine ID:                 98f949912cce4eebb2d2f8967ccf7520
	  System UUID:                1f03ac89-a430-4d08-a22d-7e7ea8a8df3d
	  Boot ID:                    02e315f4-a354-4b0b-b564-f929fd2e643c
	  Kernel Version:             5.15.0-1041-aws
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.27.4
	  Kube-Proxy Version:         v1.27.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-67b7f59bb-zhpmt                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 coredns-5d78c9869d-zj5f8                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     90s
	  kube-system                 etcd-multinode-994910                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         104s
	  kube-system                 kindnet-vmb94                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      90s
	  kube-system                 kube-apiserver-multinode-994910             250m (12%)    0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-controller-manager-multinode-994910    200m (10%)    0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-proxy-758dj                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 kube-scheduler-multinode-994910             100m (5%)     0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 89s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  112s (x8 over 112s)  kubelet          Node multinode-994910 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    112s (x8 over 112s)  kubelet          Node multinode-994910 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     112s (x8 over 112s)  kubelet          Node multinode-994910 status is now: NodeHasSufficientPID
	  Normal  Starting                 104s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  104s                 kubelet          Node multinode-994910 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    104s                 kubelet          Node multinode-994910 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     104s                 kubelet          Node multinode-994910 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           91s                  node-controller  Node multinode-994910 event: Registered Node multinode-994910 in Controller
	  Normal  NodeReady                59s                  kubelet          Node multinode-994910 status is now: NodeReady
	
	
	Name:               multinode-994910-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=multinode-994910-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 21 Aug 2023 11:26:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-994910-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 21 Aug 2023 11:27:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 21 Aug 2023 11:26:59 +0000   Mon, 21 Aug 2023 11:26:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 21 Aug 2023 11:26:59 +0000   Mon, 21 Aug 2023 11:26:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 21 Aug 2023 11:26:59 +0000   Mon, 21 Aug 2023 11:26:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 21 Aug 2023 11:26:59 +0000   Mon, 21 Aug 2023 11:26:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-994910-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022564Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022564Ki
	  pods:               110
	System Info:
	  Machine ID:                 f71c8c9bec664a14a14990543e33d518
	  System UUID:                65d258ad-6b00-460d-8163-7f991861c46b
	  Boot ID:                    02e315f4-a354-4b0b-b564-f929fd2e643c
	  Kernel Version:             5.15.0-1041-aws
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.27.4
	  Kube-Proxy Version:         v1.27.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-67b7f59bb-46dlp    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kindnet-tg99c              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      42s
	  kube-system                 kube-proxy-cmkk5           0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 41s                kube-proxy       
	  Normal  NodeHasSufficientMemory  42s (x5 over 44s)  kubelet          Node multinode-994910-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    42s (x5 over 44s)  kubelet          Node multinode-994910-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     42s (x5 over 44s)  kubelet          Node multinode-994910-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           41s                node-controller  Node multinode-994910-m02 event: Registered Node multinode-994910-m02 in Controller
	  Normal  NodeReady                11s                kubelet          Node multinode-994910-m02 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.001023] FS-Cache: O-key=[8] '9a4b5c0100000000'
	[  +0.000699] FS-Cache: N-cookie c=000000d2 [p=000000c9 fl=2 nc=0 na=1]
	[  +0.000916] FS-Cache: N-cookie d=00000000128a3fc5{9p.inode} n=00000000cd9d496d
	[  +0.001054] FS-Cache: N-key=[8] '9a4b5c0100000000'
	[  +0.002483] FS-Cache: Duplicate cookie detected
	[  +0.000733] FS-Cache: O-cookie c=000000cc [p=000000c9 fl=226 nc=0 na=1]
	[  +0.000977] FS-Cache: O-cookie d=00000000128a3fc5{9p.inode} n=00000000aedceb4a
	[  +0.001055] FS-Cache: O-key=[8] '9a4b5c0100000000'
	[  +0.000715] FS-Cache: N-cookie c=000000d3 [p=000000c9 fl=2 nc=0 na=1]
	[  +0.000930] FS-Cache: N-cookie d=00000000128a3fc5{9p.inode} n=000000000adb5282
	[  +0.001063] FS-Cache: N-key=[8] '9a4b5c0100000000'
	[  +3.434482] FS-Cache: Duplicate cookie detected
	[  +0.000767] FS-Cache: O-cookie c=000000ca [p=000000c9 fl=226 nc=0 na=1]
	[  +0.000971] FS-Cache: O-cookie d=00000000128a3fc5{9p.inode} n=000000007708d8a6
	[  +0.001033] FS-Cache: O-key=[8] '994b5c0100000000'
	[  +0.000696] FS-Cache: N-cookie c=000000d5 [p=000000c9 fl=2 nc=0 na=1]
	[  +0.000936] FS-Cache: N-cookie d=00000000128a3fc5{9p.inode} n=000000006b8e342c
	[  +0.001064] FS-Cache: N-key=[8] '994b5c0100000000'
	[  +0.475929] FS-Cache: Duplicate cookie detected
	[  +0.000712] FS-Cache: O-cookie c=000000cf [p=000000c9 fl=226 nc=0 na=1]
	[  +0.000952] FS-Cache: O-cookie d=00000000128a3fc5{9p.inode} n=00000000939f1609
	[  +0.001035] FS-Cache: O-key=[8] '9f4b5c0100000000'
	[  +0.000728] FS-Cache: N-cookie c=000000d6 [p=000000c9 fl=2 nc=0 na=1]
	[  +0.000926] FS-Cache: N-cookie d=00000000128a3fc5{9p.inode} n=00000000fa5b2717
	[  +0.001032] FS-Cache: N-key=[8] '9f4b5c0100000000'
	
	* 
	* ==> etcd [dc7690f986d61119f58b44e43927c20ba8a141823eb255fe5999f2d4e3828e86] <==
	* {"level":"info","ts":"2023-08-21T11:25:19.530Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-08-21T11:25:19.534Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-08-21T11:25:19.542Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2023-08-21T11:25:19.542Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2023-08-21T11:25:19.542Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-08-21T11:25:19.542Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-08-21T11:25:19.542Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-08-21T11:25:20.501Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2023-08-21T11:25:20.502Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-08-21T11:25:20.502Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2023-08-21T11:25:20.502Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2023-08-21T11:25:20.502Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-08-21T11:25:20.502Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2023-08-21T11:25:20.502Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-08-21T11:25:20.506Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-994910 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-08-21T11:25:20.506Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-21T11:25:20.507Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2023-08-21T11:25:20.509Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-08-21T11:25:20.509Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-21T11:25:20.511Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-08-21T11:25:20.533Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-08-21T11:25:20.533Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-08-21T11:25:20.534Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-21T11:25:20.534Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-08-21T11:25:20.534Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	* 
	* ==> kernel <==
	*  11:27:10 up 20:09,  0 users,  load average: 0.98, 1.63, 1.77
	Linux multinode-994910 5.15.0-1041-aws #46~20.04.1-Ubuntu SMP Wed Jul 19 15:39:29 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [b5ffae545601ac878f100be451c9c4ef69daf4b2e34579a5e2f2547266b34f03] <==
	* I0821 11:25:40.899567       1 main.go:116] setting mtu 1500 for CNI 
	I0821 11:25:40.899579       1 main.go:146] kindnetd IP family: "ipv4"
	I0821 11:25:40.899588       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0821 11:26:11.262235       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0821 11:26:11.276047       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0821 11:26:11.276076       1 main.go:227] handling current node
	I0821 11:26:21.293589       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0821 11:26:21.293622       1 main.go:227] handling current node
	I0821 11:26:31.304813       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0821 11:26:31.304839       1 main.go:227] handling current node
	I0821 11:26:31.304849       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0821 11:26:31.304854       1 main.go:250] Node multinode-994910-m02 has CIDR [10.244.1.0/24] 
	I0821 11:26:31.304976       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
	I0821 11:26:41.309807       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0821 11:26:41.309837       1 main.go:227] handling current node
	I0821 11:26:41.309847       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0821 11:26:41.309853       1 main.go:250] Node multinode-994910-m02 has CIDR [10.244.1.0/24] 
	I0821 11:26:51.321869       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0821 11:26:51.322015       1 main.go:227] handling current node
	I0821 11:26:51.322042       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0821 11:26:51.322049       1 main.go:250] Node multinode-994910-m02 has CIDR [10.244.1.0/24] 
	I0821 11:27:01.333946       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0821 11:27:01.334223       1 main.go:227] handling current node
	I0821 11:27:01.334272       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0821 11:27:01.334308       1 main.go:250] Node multinode-994910-m02 has CIDR [10.244.1.0/24] 
	
	* 
	* ==> kube-apiserver [03559917ca751a7c1690b8ba39297628d6e72894d48c15fc4c11b1c73864ca57] <==
	* I0821 11:25:23.277527       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	E0821 11:25:23.353437       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I0821 11:25:23.356028       1 shared_informer.go:318] Caches are synced for configmaps
	I0821 11:25:23.356509       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0821 11:25:23.357532       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0821 11:25:23.357969       1 controller.go:624] quota admission added evaluator for: namespaces
	I0821 11:25:23.557747       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0821 11:25:23.753695       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0821 11:25:24.080694       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0821 11:25:24.086728       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0821 11:25:24.086754       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0821 11:25:24.584744       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0821 11:25:24.626603       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0821 11:25:24.706086       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0821 11:25:24.712517       1 lease.go:251] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0821 11:25:24.713529       1 controller.go:624] quota admission added evaluator for: endpoints
	I0821 11:25:24.719753       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0821 11:25:25.206798       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0821 11:25:26.517157       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0821 11:25:26.530456       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0821 11:25:26.542944       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0821 11:25:40.030672       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0821 11:25:40.080676       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	E0821 11:27:07.238107       1 upgradeaware.go:426] Error proxying data from client to backend: write tcp 192.168.58.2:57408->192.168.58.2:10250: write: broken pipe
	E0821 11:27:07.694047       1 upgradeaware.go:440] Error proxying data from backend to client: write tcp 192.168.58.2:8443->192.168.58.1:52254: write: broken pipe
	
	* 
	* ==> kube-controller-manager [7fc6a940d67023d9172d56924fc3ec71f68ba80515bbe772539f0e938d6f99d7] <==
	* I0821 11:25:39.358723       1 event.go:307] "Event occurred" object="kube-system/kube-scheduler-multinode-994910" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0821 11:25:39.387894       1 shared_informer.go:318] Caches are synced for resource quota
	I0821 11:25:39.389092       1 shared_informer.go:318] Caches are synced for resource quota
	I0821 11:25:39.779173       1 shared_informer.go:318] Caches are synced for garbage collector
	I0821 11:25:39.779212       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0821 11:25:39.832531       1 shared_informer.go:318] Caches are synced for garbage collector
	I0821 11:25:40.038988       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5d78c9869d to 2"
	I0821 11:25:40.106219       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-vmb94"
	I0821 11:25:40.114876       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-758dj"
	I0821 11:25:40.317057       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5d78c9869d-l74tc"
	I0821 11:25:40.347998       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5d78c9869d-zj5f8"
	I0821 11:25:40.380216       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5d78c9869d to 1 from 2"
	I0821 11:25:40.825349       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5d78c9869d-l74tc"
	I0821 11:26:14.327178       1 node_lifecycle_controller.go:1046] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0821 11:26:28.070748       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-994910-m02\" does not exist"
	I0821 11:26:28.098147       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-994910-m02" podCIDRs=[10.244.1.0/24]
	I0821 11:26:28.103609       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-tg99c"
	I0821 11:26:28.106833       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-cmkk5"
	I0821 11:26:29.329080       1 node_lifecycle_controller.go:875] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-994910-m02"
	I0821 11:26:29.329145       1 event.go:307] "Event occurred" object="multinode-994910-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-994910-m02 event: Registered Node multinode-994910-m02 in Controller"
	W0821 11:26:59.510929       1 topologycache.go:232] Can't get CPU or zone information for multinode-994910-m02 node
	I0821 11:27:02.139575       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-67b7f59bb to 2"
	I0821 11:27:02.164785       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-67b7f59bb-46dlp"
	I0821 11:27:02.193356       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-67b7f59bb-zhpmt"
	I0821 11:27:04.350078       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb-46dlp" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-67b7f59bb-46dlp"
	
	* 
	* ==> kube-proxy [d963205ea6c1085e7d96f8a56a2cd8d224b7ee17d69c220f222b9dd310fd12fd] <==
	* I0821 11:25:41.295291       1 node.go:141] Successfully retrieved node IP: 192.168.58.2
	I0821 11:25:41.295392       1 server_others.go:110] "Detected node IP" address="192.168.58.2"
	I0821 11:25:41.295410       1 server_others.go:554] "Using iptables proxy"
	I0821 11:25:41.428023       1 server_others.go:192] "Using iptables Proxier"
	I0821 11:25:41.428065       1 server_others.go:199] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0821 11:25:41.428074       1 server_others.go:200] "Creating dualStackProxier for iptables"
	I0821 11:25:41.428086       1 server_others.go:484] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, defaulting to no-op detect-local for IPv6"
	I0821 11:25:41.428148       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0821 11:25:41.428693       1 server.go:658] "Version info" version="v1.27.4"
	I0821 11:25:41.428716       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0821 11:25:41.430252       1 config.go:188] "Starting service config controller"
	I0821 11:25:41.430272       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0821 11:25:41.430292       1 config.go:97] "Starting endpoint slice config controller"
	I0821 11:25:41.430297       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0821 11:25:41.430698       1 config.go:315] "Starting node config controller"
	I0821 11:25:41.432770       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0821 11:25:41.531269       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0821 11:25:41.531279       1 shared_informer.go:318] Caches are synced for service config
	I0821 11:25:41.533444       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [95e8b8fb237acab79ace6b84a8c67c5b2e3fcb4d54d79a7590c2f77668905c80] <==
	* W0821 11:25:23.271619       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0821 11:25:23.272634       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0821 11:25:23.272676       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0821 11:25:23.272727       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0821 11:25:23.286179       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0821 11:25:23.286894       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0821 11:25:24.138336       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0821 11:25:24.138475       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0821 11:25:24.164438       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0821 11:25:24.164485       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0821 11:25:24.213241       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0821 11:25:24.213281       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0821 11:25:24.259837       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0821 11:25:24.259964       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0821 11:25:24.273758       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0821 11:25:24.273793       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0821 11:25:24.305775       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0821 11:25:24.305811       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0821 11:25:24.349898       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0821 11:25:24.349955       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0821 11:25:24.400986       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0821 11:25:24.401042       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0821 11:25:24.488892       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0821 11:25:24.488922       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0821 11:25:27.342325       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Aug 21 11:25:40 multinode-994910 kubelet[1384]: I0821 11:25:40.214066    1384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f2232edb-23d3-4789-86a0-9e3cd68aeea3-xtables-lock\") pod \"kube-proxy-758dj\" (UID: \"f2232edb-23d3-4789-86a0-9e3cd68aeea3\") " pod="kube-system/kube-proxy-758dj"
	Aug 21 11:25:40 multinode-994910 kubelet[1384]: I0821 11:25:40.214127    1384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5257j\" (UniqueName: \"kubernetes.io/projected/f2232edb-23d3-4789-86a0-9e3cd68aeea3-kube-api-access-5257j\") pod \"kube-proxy-758dj\" (UID: \"f2232edb-23d3-4789-86a0-9e3cd68aeea3\") " pod="kube-system/kube-proxy-758dj"
	Aug 21 11:25:40 multinode-994910 kubelet[1384]: I0821 11:25:40.214156    1384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f2232edb-23d3-4789-86a0-9e3cd68aeea3-kube-proxy\") pod \"kube-proxy-758dj\" (UID: \"f2232edb-23d3-4789-86a0-9e3cd68aeea3\") " pod="kube-system/kube-proxy-758dj"
	Aug 21 11:25:40 multinode-994910 kubelet[1384]: I0821 11:25:40.214183    1384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/85d5ad45-2643-4c1a-898c-b92c6d4c313d-xtables-lock\") pod \"kindnet-vmb94\" (UID: \"85d5ad45-2643-4c1a-898c-b92c6d4c313d\") " pod="kube-system/kindnet-vmb94"
	Aug 21 11:25:40 multinode-994910 kubelet[1384]: I0821 11:25:40.214209    1384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtcsw\" (UniqueName: \"kubernetes.io/projected/85d5ad45-2643-4c1a-898c-b92c6d4c313d-kube-api-access-mtcsw\") pod \"kindnet-vmb94\" (UID: \"85d5ad45-2643-4c1a-898c-b92c6d4c313d\") " pod="kube-system/kindnet-vmb94"
	Aug 21 11:25:40 multinode-994910 kubelet[1384]: I0821 11:25:40.214236    1384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/85d5ad45-2643-4c1a-898c-b92c6d4c313d-cni-cfg\") pod \"kindnet-vmb94\" (UID: \"85d5ad45-2643-4c1a-898c-b92c6d4c313d\") " pod="kube-system/kindnet-vmb94"
	Aug 21 11:25:40 multinode-994910 kubelet[1384]: I0821 11:25:40.214261    1384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/85d5ad45-2643-4c1a-898c-b92c6d4c313d-lib-modules\") pod \"kindnet-vmb94\" (UID: \"85d5ad45-2643-4c1a-898c-b92c6d4c313d\") " pod="kube-system/kindnet-vmb94"
	Aug 21 11:25:40 multinode-994910 kubelet[1384]: I0821 11:25:40.214283    1384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f2232edb-23d3-4789-86a0-9e3cd68aeea3-lib-modules\") pod \"kube-proxy-758dj\" (UID: \"f2232edb-23d3-4789-86a0-9e3cd68aeea3\") " pod="kube-system/kube-proxy-758dj"
	Aug 21 11:25:40 multinode-994910 kubelet[1384]: W0821 11:25:40.479230    1384 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/044a79616bc979dbd0194b96cc19bbb9942147959a549722da18d30526d96040/crio-d70836267a87634da82451fd2ccbf76e036c8e66095e5fbb01d90f0dd9ec012a WatchSource:0}: Error finding container d70836267a87634da82451fd2ccbf76e036c8e66095e5fbb01d90f0dd9ec012a: Status 404 returned error can't find the container with id d70836267a87634da82451fd2ccbf76e036c8e66095e5fbb01d90f0dd9ec012a
	Aug 21 11:25:41 multinode-994910 kubelet[1384]: I0821 11:25:41.714060    1384 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-758dj" podStartSLOduration=1.7140200540000001 podCreationTimestamp="2023-08-21 11:25:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-08-21 11:25:41.71341177 +0000 UTC m=+15.230547153" watchObservedRunningTime="2023-08-21 11:25:41.714020054 +0000 UTC m=+15.231155436"
	Aug 21 11:25:46 multinode-994910 kubelet[1384]: I0821 11:25:46.630242    1384 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-vmb94" podStartSLOduration=6.63019743 podCreationTimestamp="2023-08-21 11:25:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-08-21 11:25:41.728349131 +0000 UTC m=+15.245484497" watchObservedRunningTime="2023-08-21 11:25:46.63019743 +0000 UTC m=+20.147332804"
	Aug 21 11:26:11 multinode-994910 kubelet[1384]: I0821 11:26:11.459922    1384 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Aug 21 11:26:11 multinode-994910 kubelet[1384]: I0821 11:26:11.486845    1384 topology_manager.go:212] "Topology Admit Handler"
	Aug 21 11:26:11 multinode-994910 kubelet[1384]: I0821 11:26:11.491164    1384 topology_manager.go:212] "Topology Admit Handler"
	Aug 21 11:26:11 multinode-994910 kubelet[1384]: I0821 11:26:11.560846    1384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdm2m\" (UniqueName: \"kubernetes.io/projected/b6aeac2c-fd47-4855-8a60-675aa03078a6-kube-api-access-vdm2m\") pod \"coredns-5d78c9869d-zj5f8\" (UID: \"b6aeac2c-fd47-4855-8a60-675aa03078a6\") " pod="kube-system/coredns-5d78c9869d-zj5f8"
	Aug 21 11:26:11 multinode-994910 kubelet[1384]: I0821 11:26:11.560901    1384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/66ef6e75-74a3-4384-8e70-dccc09707589-tmp\") pod \"storage-provisioner\" (UID: \"66ef6e75-74a3-4384-8e70-dccc09707589\") " pod="kube-system/storage-provisioner"
	Aug 21 11:26:11 multinode-994910 kubelet[1384]: I0821 11:26:11.560928    1384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b6aeac2c-fd47-4855-8a60-675aa03078a6-config-volume\") pod \"coredns-5d78c9869d-zj5f8\" (UID: \"b6aeac2c-fd47-4855-8a60-675aa03078a6\") " pod="kube-system/coredns-5d78c9869d-zj5f8"
	Aug 21 11:26:11 multinode-994910 kubelet[1384]: I0821 11:26:11.560957    1384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9mzf\" (UniqueName: \"kubernetes.io/projected/66ef6e75-74a3-4384-8e70-dccc09707589-kube-api-access-t9mzf\") pod \"storage-provisioner\" (UID: \"66ef6e75-74a3-4384-8e70-dccc09707589\") " pod="kube-system/storage-provisioner"
	Aug 21 11:26:11 multinode-994910 kubelet[1384]: W0821 11:26:11.845334    1384 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/044a79616bc979dbd0194b96cc19bbb9942147959a549722da18d30526d96040/crio-cf614286d7f18daae5de51625ceb2a33dfd64d8709926963110463c6e335866e WatchSource:0}: Error finding container cf614286d7f18daae5de51625ceb2a33dfd64d8709926963110463c6e335866e: Status 404 returned error can't find the container with id cf614286d7f18daae5de51625ceb2a33dfd64d8709926963110463c6e335866e
	Aug 21 11:26:12 multinode-994910 kubelet[1384]: I0821 11:26:12.782078    1384 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5d78c9869d-zj5f8" podStartSLOduration=32.782036718 podCreationTimestamp="2023-08-21 11:25:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-08-21 11:26:12.765657777 +0000 UTC m=+46.282793143" watchObservedRunningTime="2023-08-21 11:26:12.782036718 +0000 UTC m=+46.299172092"
	Aug 21 11:26:12 multinode-994910 kubelet[1384]: I0821 11:26:12.795171    1384 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=31.795129865 podCreationTimestamp="2023-08-21 11:25:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-08-21 11:26:12.782519449 +0000 UTC m=+46.299654814" watchObservedRunningTime="2023-08-21 11:26:12.795129865 +0000 UTC m=+46.312265239"
	Aug 21 11:27:02 multinode-994910 kubelet[1384]: I0821 11:27:02.216337    1384 topology_manager.go:212] "Topology Admit Handler"
	Aug 21 11:27:02 multinode-994910 kubelet[1384]: W0821 11:27:02.223572    1384 reflector.go:533] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:multinode-994910" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'multinode-994910' and this object
	Aug 21 11:27:02 multinode-994910 kubelet[1384]: E0821 11:27:02.223624    1384 reflector.go:148] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:multinode-994910" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'multinode-994910' and this object
	Aug 21 11:27:02 multinode-994910 kubelet[1384]: I0821 11:27:02.266803    1384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dt8zn\" (UniqueName: \"kubernetes.io/projected/a2ad5b2b-a29c-44ba-b4c1-cc4fd97cf238-kube-api-access-dt8zn\") pod \"busybox-67b7f59bb-zhpmt\" (UID: \"a2ad5b2b-a29c-44ba-b4c1-cc4fd97cf238\") " pod="default/busybox-67b7f59bb-zhpmt"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p multinode-994910 -n multinode-994910
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-994910 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (4.55s)

                                                
                                    
x
+
TestRunningBinaryUpgrade (71.68s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:132: (dbg) Run:  /tmp/minikube-v1.17.0.874117000.exe start -p running-upgrade-549783 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:132: (dbg) Done: /tmp/minikube-v1.17.0.874117000.exe start -p running-upgrade-549783 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m1.749018818s)
version_upgrade_test.go:142: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-549783 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0821 11:42:39.857185 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/functional-723696/client.crt: no such file or directory
version_upgrade_test.go:142: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p running-upgrade-549783 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (4.918359103s)

                                                
                                                
-- stdout --
	* [running-upgrade-549783] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17102
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17102-2734539/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17102-2734539/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.27.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.4
	* Using the docker driver based on existing profile
	* Starting control plane node running-upgrade-549783 in cluster running-upgrade-549783
	* Pulling base image ...
	* Updating the running docker "running-upgrade-549783" container ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0821 11:42:38.180400 2866237 out.go:296] Setting OutFile to fd 1 ...
	I0821 11:42:38.180572 2866237 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 11:42:38.180600 2866237 out.go:309] Setting ErrFile to fd 2...
	I0821 11:42:38.180620 2866237 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 11:42:38.180916 2866237 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17102-2734539/.minikube/bin
	I0821 11:42:38.181292 2866237 out.go:303] Setting JSON to false
	I0821 11:42:38.182712 2866237 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":73502,"bootTime":1692544656,"procs":423,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0821 11:42:38.182816 2866237 start.go:138] virtualization:  
	I0821 11:42:38.185584 2866237 out.go:177] * [running-upgrade-549783] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0821 11:42:38.188267 2866237 out.go:177]   - MINIKUBE_LOCATION=17102
	I0821 11:42:38.188397 2866237 notify.go:220] Checking for updates...
	I0821 11:42:38.197981 2866237 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0821 11:42:38.200598 2866237 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17102-2734539/kubeconfig
	I0821 11:42:38.202342 2866237 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17102-2734539/.minikube
	I0821 11:42:38.204237 2866237 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0821 11:42:38.206163 2866237 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0821 11:42:38.208440 2866237 config.go:182] Loaded profile config "running-upgrade-549783": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0821 11:42:38.210782 2866237 out.go:177] * Kubernetes 1.27.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.4
	I0821 11:42:38.212693 2866237 driver.go:373] Setting default libvirt URI to qemu:///system
	I0821 11:42:38.243717 2866237 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0821 11:42:38.243891 2866237 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0821 11:42:38.334195 2866237 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:54 SystemTime:2023-08-21 11:42:38.321790251 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0821 11:42:38.334310 2866237 docker.go:294] overlay module found
	I0821 11:42:38.337475 2866237 out.go:177] * Using the docker driver based on existing profile
	I0821 11:42:38.339433 2866237 start.go:298] selected driver: docker
	I0821 11:42:38.339453 2866237 start.go:902] validating driver "docker" against &{Name:running-upgrade-549783 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:running-upgrade-549783 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.70.152 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 11:42:38.339556 2866237 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0821 11:42:38.340217 2866237 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0821 11:42:38.418504 2866237 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:54 SystemTime:2023-08-21 11:42:38.408428515 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0821 11:42:38.418912 2866237 cni.go:84] Creating CNI manager for ""
	I0821 11:42:38.418929 2866237 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0821 11:42:38.418942 2866237 start_flags.go:319] config:
	{Name:running-upgrade-549783 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:running-upgrade-549783 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.70.152 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 11:42:38.421401 2866237 out.go:177] * Starting control plane node running-upgrade-549783 in cluster running-upgrade-549783
	I0821 11:42:38.423370 2866237 cache.go:122] Beginning downloading kic base image for docker with crio
	I0821 11:42:38.425284 2866237 out.go:177] * Pulling base image ...
	I0821 11:42:38.426887 2866237 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I0821 11:42:38.427020 2866237 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I0821 11:42:38.449727 2866237 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon, skipping pull
	I0821 11:42:38.449750 2866237 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e exists in daemon, skipping load
	W0821 11:42:38.501026 2866237 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I0821 11:42:38.501189 2866237 profile.go:148] Saving config to /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/running-upgrade-549783/config.json ...
	I0821 11:42:38.501457 2866237 cache.go:195] Successfully downloaded all kic artifacts
	I0821 11:42:38.501503 2866237 start.go:365] acquiring machines lock for running-upgrade-549783: {Name:mkfaec941c2cc756c303bcf2a3f5af711ceabbb9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 11:42:38.501568 2866237 start.go:369] acquired machines lock for "running-upgrade-549783" in 36.488µs
	I0821 11:42:38.501586 2866237 start.go:96] Skipping create...Using existing machine configuration
	I0821 11:42:38.501594 2866237 fix.go:54] fixHost starting: 
	I0821 11:42:38.501857 2866237 cli_runner.go:164] Run: docker container inspect running-upgrade-549783 --format={{.State.Status}}
	I0821 11:42:38.502036 2866237 cache.go:107] acquiring lock: {Name:mk7f8a34da0b383537fabc9b6e390429eff319a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 11:42:38.502115 2866237 cache.go:115] /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0821 11:42:38.502129 2866237 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 114.221µs
	I0821 11:42:38.502138 2866237 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0821 11:42:38.502151 2866237 cache.go:107] acquiring lock: {Name:mk4153eae7806e222b01564ed0cfe02695401b0c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 11:42:38.502187 2866237 cache.go:115] /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I0821 11:42:38.502196 2866237 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 45.808µs
	I0821 11:42:38.502203 2866237 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	I0821 11:42:38.502215 2866237 cache.go:107] acquiring lock: {Name:mkf8e6a0246b532c0a6d1597764c57cbd46923e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 11:42:38.502248 2866237 cache.go:115] /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I0821 11:42:38.502255 2866237 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 41.623µs
	I0821 11:42:38.502263 2866237 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	I0821 11:42:38.502274 2866237 cache.go:107] acquiring lock: {Name:mk80067060178902457fd3cca5de63d48ad9f542 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 11:42:38.502319 2866237 cache.go:115] /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I0821 11:42:38.502330 2866237 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 56.163µs
	I0821 11:42:38.502342 2866237 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	I0821 11:42:38.502367 2866237 cache.go:107] acquiring lock: {Name:mk05a25396c7058d8baf370462f200fc462eb9af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 11:42:38.502398 2866237 cache.go:115] /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I0821 11:42:38.502403 2866237 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 37.53µs
	I0821 11:42:38.502417 2866237 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	I0821 11:42:38.502428 2866237 cache.go:107] acquiring lock: {Name:mke90e9cef34726fd360e9e46700eff141b32ef8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 11:42:38.502457 2866237 cache.go:115] /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I0821 11:42:38.502461 2866237 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 36.979µs
	I0821 11:42:38.502468 2866237 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	I0821 11:42:38.502476 2866237 cache.go:107] acquiring lock: {Name:mk0492acfd72012e7e6b77986a4d0f1831851535 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 11:42:38.502508 2866237 cache.go:115] /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I0821 11:42:38.502532 2866237 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 56.762µs
	I0821 11:42:38.502539 2866237 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I0821 11:42:38.502546 2866237 cache.go:107] acquiring lock: {Name:mka8fc0beab33e5fbcbe22d83cad21f967d544a0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 11:42:38.502575 2866237 cache.go:115] /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I0821 11:42:38.502586 2866237 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 38.449µs
	I0821 11:42:38.502593 2866237 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	I0821 11:42:38.502598 2866237 cache.go:87] Successfully saved all images to host disk.
	I0821 11:42:38.521863 2866237 fix.go:102] recreateIfNeeded on running-upgrade-549783: state=Running err=<nil>
	W0821 11:42:38.521927 2866237 fix.go:128] unexpected machine state, will restart: <nil>
	I0821 11:42:38.524398 2866237 out.go:177] * Updating the running docker "running-upgrade-549783" container ...
	I0821 11:42:38.526490 2866237 machine.go:88] provisioning docker machine ...
	I0821 11:42:38.526518 2866237 ubuntu.go:169] provisioning hostname "running-upgrade-549783"
	I0821 11:42:38.526590 2866237 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-549783
	I0821 11:42:38.545236 2866237 main.go:141] libmachine: Using SSH client type: native
	I0821 11:42:38.546343 2866237 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0570] 0x3a2f00 <nil>  [] 0s} 127.0.0.1 36370 <nil> <nil>}
	I0821 11:42:38.546366 2866237 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-549783 && echo "running-upgrade-549783" | sudo tee /etc/hostname
	I0821 11:42:38.738309 2866237 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-549783
	
	I0821 11:42:38.738404 2866237 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-549783
	I0821 11:42:38.771155 2866237 main.go:141] libmachine: Using SSH client type: native
	I0821 11:42:38.771698 2866237 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0570] 0x3a2f00 <nil>  [] 0s} 127.0.0.1 36370 <nil> <nil>}
	I0821 11:42:38.771718 2866237 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-549783' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-549783/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-549783' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0821 11:42:38.923264 2866237 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0821 11:42:38.923291 2866237 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17102-2734539/.minikube CaCertPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17102-2734539/.minikube}
	I0821 11:42:38.923316 2866237 ubuntu.go:177] setting up certificates
	I0821 11:42:38.923328 2866237 provision.go:83] configureAuth start
	I0821 11:42:38.923390 2866237 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-549783
	I0821 11:42:38.961250 2866237 provision.go:138] copyHostCerts
	I0821 11:42:38.961327 2866237 exec_runner.go:144] found /home/jenkins/minikube-integration/17102-2734539/.minikube/ca.pem, removing ...
	I0821 11:42:38.961359 2866237 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17102-2734539/.minikube/ca.pem
	I0821 11:42:38.961448 2866237 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17102-2734539/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17102-2734539/.minikube/ca.pem (1078 bytes)
	I0821 11:42:38.961553 2866237 exec_runner.go:144] found /home/jenkins/minikube-integration/17102-2734539/.minikube/cert.pem, removing ...
	I0821 11:42:38.961561 2866237 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17102-2734539/.minikube/cert.pem
	I0821 11:42:38.961587 2866237 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17102-2734539/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17102-2734539/.minikube/cert.pem (1123 bytes)
	I0821 11:42:38.961654 2866237 exec_runner.go:144] found /home/jenkins/minikube-integration/17102-2734539/.minikube/key.pem, removing ...
	I0821 11:42:38.961660 2866237 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17102-2734539/.minikube/key.pem
	I0821 11:42:38.961688 2866237 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17102-2734539/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17102-2734539/.minikube/key.pem (1675 bytes)
	I0821 11:42:38.961746 2866237 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17102-2734539/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17102-2734539/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17102-2734539/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-549783 san=[192.168.70.152 127.0.0.1 localhost 127.0.0.1 minikube running-upgrade-549783]
	I0821 11:42:39.383937 2866237 provision.go:172] copyRemoteCerts
	I0821 11:42:39.384008 2866237 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0821 11:42:39.384058 2866237 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-549783
	I0821 11:42:39.403799 2866237 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36370 SSHKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/machines/running-upgrade-549783/id_rsa Username:docker}
	I0821 11:42:39.516113 2866237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-2734539/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0821 11:42:39.549577 2866237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-2734539/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0821 11:42:39.573353 2866237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-2734539/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0821 11:42:39.597971 2866237 provision.go:86] duration metric: configureAuth took 674.627337ms
	I0821 11:42:39.598009 2866237 ubuntu.go:193] setting minikube options for container-runtime
	I0821 11:42:39.598222 2866237 config.go:182] Loaded profile config "running-upgrade-549783": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0821 11:42:39.598357 2866237 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-549783
	I0821 11:42:39.616398 2866237 main.go:141] libmachine: Using SSH client type: native
	I0821 11:42:39.616960 2866237 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0570] 0x3a2f00 <nil>  [] 0s} 127.0.0.1 36370 <nil> <nil>}
	I0821 11:42:39.616985 2866237 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0821 11:42:40.236600 2866237 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0821 11:42:40.236674 2866237 machine.go:91] provisioned docker machine in 1.710166155s
	I0821 11:42:40.236691 2866237 start.go:300] post-start starting for "running-upgrade-549783" (driver="docker")
	I0821 11:42:40.236701 2866237 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0821 11:42:40.236790 2866237 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0821 11:42:40.236844 2866237 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-549783
	I0821 11:42:40.255874 2866237 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36370 SSHKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/machines/running-upgrade-549783/id_rsa Username:docker}
	I0821 11:42:40.355363 2866237 ssh_runner.go:195] Run: cat /etc/os-release
	I0821 11:42:40.359665 2866237 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0821 11:42:40.359691 2866237 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0821 11:42:40.359705 2866237 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0821 11:42:40.359712 2866237 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I0821 11:42:40.359722 2866237 filesync.go:126] Scanning /home/jenkins/minikube-integration/17102-2734539/.minikube/addons for local assets ...
	I0821 11:42:40.359787 2866237 filesync.go:126] Scanning /home/jenkins/minikube-integration/17102-2734539/.minikube/files for local assets ...
	I0821 11:42:40.359895 2866237 filesync.go:149] local asset: /home/jenkins/minikube-integration/17102-2734539/.minikube/files/etc/ssl/certs/27399302.pem -> 27399302.pem in /etc/ssl/certs
	I0821 11:42:40.360008 2866237 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0821 11:42:40.370367 2866237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-2734539/.minikube/files/etc/ssl/certs/27399302.pem --> /etc/ssl/certs/27399302.pem (1708 bytes)
	I0821 11:42:40.394615 2866237 start.go:303] post-start completed in 157.909037ms
	I0821 11:42:40.394712 2866237 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0821 11:42:40.394760 2866237 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-549783
	I0821 11:42:40.413465 2866237 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36370 SSHKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/machines/running-upgrade-549783/id_rsa Username:docker}
	I0821 11:42:40.514856 2866237 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0821 11:42:40.535932 2866237 fix.go:56] fixHost completed within 2.034328778s
	I0821 11:42:40.535960 2866237 start.go:83] releasing machines lock for "running-upgrade-549783", held for 2.034378762s
	I0821 11:42:40.536040 2866237 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-549783
	I0821 11:42:40.556942 2866237 ssh_runner.go:195] Run: cat /version.json
	I0821 11:42:40.556997 2866237 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-549783
	I0821 11:42:40.557046 2866237 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0821 11:42:40.557119 2866237 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-549783
	I0821 11:42:40.587386 2866237 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36370 SSHKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/machines/running-upgrade-549783/id_rsa Username:docker}
	I0821 11:42:40.596683 2866237 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36370 SSHKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/machines/running-upgrade-549783/id_rsa Username:docker}
	W0821 11:42:40.682764 2866237 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0821 11:42:40.682850 2866237 ssh_runner.go:195] Run: systemctl --version
	I0821 11:42:40.754188 2866237 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0821 11:42:40.851708 2866237 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0821 11:42:40.857505 2866237 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0821 11:42:40.901425 2866237 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0821 11:42:40.901508 2866237 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0821 11:42:40.965851 2866237 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0821 11:42:40.965905 2866237 start.go:466] detecting cgroup driver to use...
	I0821 11:42:40.965940 2866237 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0821 11:42:40.965994 2866237 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0821 11:42:41.110055 2866237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0821 11:42:41.134690 2866237 docker.go:196] disabling cri-docker service (if available) ...
	I0821 11:42:41.134763 2866237 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0821 11:42:41.149606 2866237 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0821 11:42:41.196751 2866237 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0821 11:42:41.228357 2866237 docker.go:206] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0821 11:42:41.228436 2866237 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0821 11:42:41.623345 2866237 docker.go:212] disabling docker service ...
	I0821 11:42:41.623427 2866237 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0821 11:42:41.690398 2866237 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0821 11:42:41.795186 2866237 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0821 11:42:42.449716 2866237 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0821 11:42:42.811924 2866237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0821 11:42:42.854405 2866237 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0821 11:42:42.963245 2866237 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0821 11:42:42.963309 2866237 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0821 11:42:43.034039 2866237 out.go:177] 
	W0821 11:42:43.035864 2866237 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0821 11:42:43.036055 2866237 out.go:239] * 
	W0821 11:42:43.037130 2866237 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0821 11:42:43.038341 2866237 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:144: upgrade from v1.17.0 to HEAD failed: out/minikube-linux-arm64 start -p running-upgrade-549783 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
panic.go:522: *** TestRunningBinaryUpgrade FAILED at 2023-08-21 11:42:43.06659047 +0000 UTC m=+2447.500275377
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-549783
helpers_test.go:235: (dbg) docker inspect running-upgrade-549783:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "33ab225e930fbe61ff0607ab475c357f68d5b605a35b82c338806d38647c30f1",
	        "Created": "2023-08-21T11:41:48.335796683Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2860423,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-08-21T11:41:48.779836253Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9b79b8263a5873a7b57b8bb7698df1f71e90108b3174dea92dc6c576c0a9dbf9",
	        "ResolvConfPath": "/var/lib/docker/containers/33ab225e930fbe61ff0607ab475c357f68d5b605a35b82c338806d38647c30f1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/33ab225e930fbe61ff0607ab475c357f68d5b605a35b82c338806d38647c30f1/hostname",
	        "HostsPath": "/var/lib/docker/containers/33ab225e930fbe61ff0607ab475c357f68d5b605a35b82c338806d38647c30f1/hosts",
	        "LogPath": "/var/lib/docker/containers/33ab225e930fbe61ff0607ab475c357f68d5b605a35b82c338806d38647c30f1/33ab225e930fbe61ff0607ab475c357f68d5b605a35b82c338806d38647c30f1-json.log",
	        "Name": "/running-upgrade-549783",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-549783:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "running-upgrade-549783",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/24ef9f8956cbf197e30c1ea53194e5acd80af1b4fb2553893293a07f6b5a8f53-init/diff:/var/lib/docker/overlay2/dd9aa3ebacc2bd8e49180faa651efcfe3eeb7b9db1d119ddd7565a97f7c1a653/diff:/var/lib/docker/overlay2/372f589b9252b91bf24677acd194deea14f086dbc860b4e26b9f55138f26ad75/diff:/var/lib/docker/overlay2/d1bdbbb2aa0f5709103214fc0ed7b2a69bdae3eba73e9edd77d032183517ba0d/diff:/var/lib/docker/overlay2/ec760039300c00751b990318e7d4fd5653a4b38215e97057328f616006cc1cbc/diff:/var/lib/docker/overlay2/335a3895c199dee9577a9cfdca67e05bc991fed57bbe8fe45f6260b00cef28ae/diff:/var/lib/docker/overlay2/1476be91fa8a913e299286825de3bb45f489974643c957f6266dd5e95b813511/diff:/var/lib/docker/overlay2/6ce346da6e0119f196b294ce32c603c18434d76d8a0f0aae0f4d0498044c7696/diff:/var/lib/docker/overlay2/72effb1d8ea0d3eb3878ffe1f4e23e43af21f55202f3b3da584b4a59bf7bc1d9/diff:/var/lib/docker/overlay2/88d09413846eef42a622d19bcc13d1c99265a60fe7d711d18c864292c2243996/diff:/var/lib/docker/overlay2/f7a083
7009cab3264604a770078fe279dffda3ef8f2606f61d3e14e8ffa7ca69/diff:/var/lib/docker/overlay2/50715c3767249b7118651d0e8859f9a699351806692aaabe5002b23150568be3/diff:/var/lib/docker/overlay2/f5598a09723668820716b4d8a17ea216c6dcd5c46c122c1b1e4b99f9eda8ece9/diff:/var/lib/docker/overlay2/55bf8ce28f7496975fbb9c0697d320d2703fc18b43ade0a48812daf1e3749a08/diff:/var/lib/docker/overlay2/f46cfdd912a12dbd7cfb16ece3585374c8ef8afe3773b7ae2a72880bf504bf96/diff:/var/lib/docker/overlay2/5299f6035f16664c9b0425b69549f9d3738d79532eb1f325e8bb3a5996c5fff0/diff:/var/lib/docker/overlay2/cc03a7a3b778d57ec568d61366762fcaa5c193e420b67a8f0348fa647c3e07dc/diff:/var/lib/docker/overlay2/873c77481f1ecac5332b42f72029b2de9f3e35eb0a7ec15c33879dd05afd76fc/diff:/var/lib/docker/overlay2/232e9ef7fdd260f8301362df93cab1dc3200d640f2f615ec6df4ac2e5ffac0d4/diff:/var/lib/docker/overlay2/c1ef095dbc099778c87aca10ffe884df314d91e44c7908e25fd5884276e2b8bb/diff:/var/lib/docker/overlay2/68ded1a0253488d28be4b5e8e1512bd69405b122bfa02806bcd33af163c81a06/diff:/var/lib/docker/overlay2/a1b83aa2cc7e82102a28b3fbfbbf56096137d8d8029656951921ffd539a263d4/diff:/var/lib/docker/overlay2/dfa842b00004aa9877284bf900cfcaadf2b8314cfe1e2e000ebfbcce54fa5f02/diff:/var/lib/docker/overlay2/7df3755261310a01beb2ccaff67345e7f3e570ea29d2524d56abb88dbfb4be3b/diff:/var/lib/docker/overlay2/12c7073241acbdace6f0d231f90928145337557075d43b1980589a07ea109e42/diff:/var/lib/docker/overlay2/6f416a2a46d0f1aadb0b42a1ce4809361c80a390799e01cdd6c3de8abb5f784c/diff:/var/lib/docker/overlay2/55871e23789764420375c9d3d430cc04824ecaf1b8a9b7ba1857beec9be8b8ab/diff:/var/lib/docker/overlay2/a1b8b4759c5d13769ed38cc0c887441e28e6936879a5fb6acfac8854c0997daa/diff:/var/lib/docker/overlay2/7ed2860a60aa12d711a7d08e035097ca860ced4bfbeee73d58d71b64e3b397a7/diff:/var/lib/docker/overlay2/1836c614b6df7333043f6b9c91acd33232b7f09bce76e738619ad26afe5ece1a/diff:/var/lib/docker/overlay2/b7831147792adaf30cc5acd4390c5b6a02235d9a7259b284ac7068fe9f336d21/diff:/var/lib/docker/overlay2/4924c669704906a86aebf9308d35252576b810605304cf81f9dd0da8feace018/diff:/var/lib/docker/overlay2/d993bcf0c53386f42c2189366b29cd8cbd0fcc4c997223e6a05c0092e43c4e77/diff:/var/lib/docker/overlay2/811aaddef9dda72dc55015f2339945a960195c242c516793600e539a438dd859/diff:/var/lib/docker/overlay2/c800e64cfad3619e218a4a15af05c46166a9f45b4d71c9c7e3cd62ea10f03c87/diff",
	                "MergedDir": "/var/lib/docker/overlay2/24ef9f8956cbf197e30c1ea53194e5acd80af1b4fb2553893293a07f6b5a8f53/merged",
	                "UpperDir": "/var/lib/docker/overlay2/24ef9f8956cbf197e30c1ea53194e5acd80af1b4fb2553893293a07f6b5a8f53/diff",
	                "WorkDir": "/var/lib/docker/overlay2/24ef9f8956cbf197e30c1ea53194e5acd80af1b4fb2553893293a07f6b5a8f53/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-549783",
	                "Source": "/var/lib/docker/volumes/running-upgrade-549783/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-549783",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-549783",
	                "name.minikube.sigs.k8s.io": "running-upgrade-549783",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0437406466af23959787acbbba365e08bf491e6404d725e0337d2b0df931046a",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36370"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36369"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36368"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36367"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/0437406466af",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "running-upgrade-549783": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.70.152"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "33ab225e930f",
	                        "running-upgrade-549783"
	                    ],
	                    "NetworkID": "5e0aa6cd34d0c8c4e8d4ca02fa6a5d6deaf1d612b108554bc1b27358abc2905c",
	                    "EndpointID": "b725cc49d98a03272098adcd1c688906057d437e6e21630c4818699a67ccbcf0",
	                    "Gateway": "192.168.70.1",
	                    "IPAddress": "192.168.70.152",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:46:98",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p running-upgrade-549783 -n running-upgrade-549783
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p running-upgrade-549783 -n running-upgrade-549783: exit status 4 (875.236868ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0821 11:42:43.800219 2866940 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-549783" does not appear in /home/jenkins/minikube-integration/17102-2734539/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-549783" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-549783" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-549783
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-549783: (3.138926696s)
--- FAIL: TestRunningBinaryUpgrade (71.68s)

                                                
                                    
TestMissingContainerUpgrade (145.63s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:321: (dbg) Run:  /tmp/minikube-v1.17.0.877180673.exe start -p missing-upgrade-344332 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:321: (dbg) Done: /tmp/minikube-v1.17.0.877180673.exe start -p missing-upgrade-344332 --memory=2200 --driver=docker  --container-runtime=crio: (1m35.279561822s)
version_upgrade_test.go:330: (dbg) Run:  docker stop missing-upgrade-344332
version_upgrade_test.go:330: (dbg) Done: docker stop missing-upgrade-344332: (10.401061746s)
version_upgrade_test.go:335: (dbg) Run:  docker rm missing-upgrade-344332
version_upgrade_test.go:341: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-344332 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:341: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p missing-upgrade-344332 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (35.859743271s)

                                                
                                                
-- stdout --
	* [missing-upgrade-344332] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17102
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17102-2734539/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17102-2734539/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.27.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.4
	* Using the docker driver based on existing profile
	* Starting control plane node missing-upgrade-344332 in cluster missing-upgrade-344332
	* Pulling base image ...
	* docker "missing-upgrade-344332" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0821 11:39:51.478873 2850908 out.go:296] Setting OutFile to fd 1 ...
	I0821 11:39:51.479101 2850908 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 11:39:51.479127 2850908 out.go:309] Setting ErrFile to fd 2...
	I0821 11:39:51.479146 2850908 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 11:39:51.479411 2850908 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17102-2734539/.minikube/bin
	I0821 11:39:51.479836 2850908 out.go:303] Setting JSON to false
	I0821 11:39:51.481026 2850908 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":73335,"bootTime":1692544656,"procs":383,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0821 11:39:51.481131 2850908 start.go:138] virtualization:  
	I0821 11:39:51.484966 2850908 out.go:177] * [missing-upgrade-344332] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0821 11:39:51.487670 2850908 out.go:177]   - MINIKUBE_LOCATION=17102
	I0821 11:39:51.489727 2850908 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0821 11:39:51.487765 2850908 notify.go:220] Checking for updates...
	I0821 11:39:51.491682 2850908 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17102-2734539/kubeconfig
	I0821 11:39:51.493678 2850908 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17102-2734539/.minikube
	I0821 11:39:51.495283 2850908 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0821 11:39:51.497135 2850908 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0821 11:39:51.500038 2850908 config.go:182] Loaded profile config "missing-upgrade-344332": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0821 11:39:51.506903 2850908 out.go:177] * Kubernetes 1.27.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.4
	I0821 11:39:51.508786 2850908 driver.go:373] Setting default libvirt URI to qemu:///system
	I0821 11:39:51.538717 2850908 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0821 11:39:51.538815 2850908 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0821 11:39:51.668935 2850908 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:53 SystemTime:2023-08-21 11:39:51.658415473 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0821 11:39:51.669044 2850908 docker.go:294] overlay module found
	I0821 11:39:51.671428 2850908 out.go:177] * Using the docker driver based on existing profile
	I0821 11:39:51.673313 2850908 start.go:298] selected driver: docker
	I0821 11:39:51.673326 2850908 start.go:902] validating driver "docker" against &{Name:missing-upgrade-344332 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:missing-upgrade-344332 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.13 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 11:39:51.673432 2850908 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0821 11:39:51.674048 2850908 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0821 11:39:51.780519 2850908 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:53 SystemTime:2023-08-21 11:39:51.763827296 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0821 11:39:51.780810 2850908 cni.go:84] Creating CNI manager for ""
	I0821 11:39:51.780818 2850908 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0821 11:39:51.780828 2850908 start_flags.go:319] config:
	{Name:missing-upgrade-344332 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:missing-upgrade-344332 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.13 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 11:39:51.783185 2850908 out.go:177] * Starting control plane node missing-upgrade-344332 in cluster missing-upgrade-344332
	I0821 11:39:51.785152 2850908 cache.go:122] Beginning downloading kic base image for docker with crio
	I0821 11:39:51.787362 2850908 out.go:177] * Pulling base image ...
	I0821 11:39:51.789290 2850908 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I0821 11:39:51.789455 2850908 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I0821 11:39:51.819166 2850908 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e to local cache
	I0821 11:39:51.819688 2850908 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local cache directory
	I0821 11:39:51.820170 2850908 image.go:118] Writing gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e to local cache
	W0821 11:39:51.857746 2850908 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I0821 11:39:51.857936 2850908 profile.go:148] Saving config to /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/missing-upgrade-344332/config.json ...
	I0821 11:39:51.858733 2850908 cache.go:107] acquiring lock: {Name:mk05a25396c7058d8baf370462f200fc462eb9af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 11:39:51.858888 2850908 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.2
	I0821 11:39:51.859264 2850908 cache.go:107] acquiring lock: {Name:mk4153eae7806e222b01564ed0cfe02695401b0c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 11:39:51.859370 2850908 cache.go:107] acquiring lock: {Name:mk7f8a34da0b383537fabc9b6e390429eff319a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 11:39:51.859445 2850908 cache.go:107] acquiring lock: {Name:mkf8e6a0246b532c0a6d1597764c57cbd46923e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 11:39:51.859527 2850908 cache.go:107] acquiring lock: {Name:mke90e9cef34726fd360e9e46700eff141b32ef8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 11:39:51.859695 2850908 cache.go:115] /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0821 11:39:51.859708 2850908 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 341.236µs
	I0821 11:39:51.859718 2850908 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0821 11:39:51.859731 2850908 cache.go:107] acquiring lock: {Name:mk0492acfd72012e7e6b77986a4d0f1831851535 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 11:39:51.859816 2850908 cache.go:107] acquiring lock: {Name:mk80067060178902457fd3cca5de63d48ad9f542 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 11:39:51.861104 2850908 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.2
	I0821 11:39:51.859883 2850908 cache.go:107] acquiring lock: {Name:mka8fc0beab33e5fbcbe22d83cad21f967d544a0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 11:39:51.859924 2850908 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.2
	I0821 11:39:51.860346 2850908 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0821 11:39:51.860368 2850908 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.2
	I0821 11:39:51.860861 2850908 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0821 11:39:51.861025 2850908 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.2
	I0821 11:39:51.865165 2850908 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0821 11:39:51.865667 2850908 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.2
	I0821 11:39:51.865858 2850908 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.2
	I0821 11:39:51.865991 2850908 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0821 11:39:51.866638 2850908 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0821 11:39:51.867728 2850908 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0821 11:39:51.868443 2850908 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.2
	W0821 11:39:52.266915 2850908 image.go:265] image registry.k8s.io/kube-proxy:v1.20.2 arch mismatch: want arm64 got amd64. fixing
	I0821 11:39:52.267006 2850908 cache.go:162] opening:  /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2
	I0821 11:39:52.279657 2850908 cache.go:162] opening:  /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I0821 11:39:52.290658 2850908 cache.go:162] opening:  /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2
	I0821 11:39:52.307764 2850908 cache.go:162] opening:  /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2
	I0821 11:39:52.323286 2850908 cache.go:162] opening:  /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2
	W0821 11:39:52.323863 2850908 image.go:265] image registry.k8s.io/coredns:1.7.0 arch mismatch: want arm64 got amd64. fixing
	I0821 11:39:52.323905 2850908 cache.go:162] opening:  /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0
	W0821 11:39:52.326513 2850908 image.go:265] image registry.k8s.io/etcd:3.4.13-0 arch mismatch: want arm64 got amd64. fixing
	I0821 11:39:52.326580 2850908 cache.go:162] opening:  /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0
	I0821 11:39:52.416972 2850908 cache.go:157] /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I0821 11:39:52.417001 2850908 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 557.475094ms
	I0821 11:39:52.417013 2850908 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  17.69 KiB / 287.99 MiB [>] 0.01% ? p/s ?
	    > gcr.io/k8s-minikube/kicbase...:  305.35 KiB / 287.99 MiB [] 0.10% ? p/s ?
	I0821 11:39:52.830188 2850908 cache.go:157] /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I0821 11:39:52.830263 2850908 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 970.379742ms
	I0821 11:39:52.830289 2850908 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	I0821 11:39:52.894271 2850908 cache.go:157] /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I0821 11:39:52.894354 2850908 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 1.034528015s
	I0821 11:39:52.894383 2850908 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  5.77 MiB / 287.99 MiB [>_] 2.00% ? p/s ?
	I0821 11:39:53.135092 2850908 cache.go:157] /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I0821 11:39:53.135264 2850908 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 1.275817912s
	I0821 11:39:53.135316 2850908 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  18.70 MiB / 287.99 MiB  6.49% 30.85 MiB
	I0821 11:39:53.221423 2850908 cache.go:157] /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I0821 11:39:53.221496 2850908 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 1.36223588s
	I0821 11:39:53.221525 2850908 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  25.93 MiB / 287.99 MiB  9.00% 30.85 MiB
	I0821 11:39:53.676325 2850908 cache.go:157] /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I0821 11:39:53.676396 2850908 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 1.81767035s
	I0821 11:39:53.676422 2850908 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  71.88 MiB / 287.99 MiB  24.96% 30.12 MiB
	I0821 11:39:54.830517 2850908 cache.go:157] /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I0821 11:39:54.830555 2850908 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 2.970823858s
	I0821 11:39:54.830573 2850908 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I0821 11:39:54.830583 2850908 cache.go:87] Successfully saved all images to host disk.
	    > gcr.io/k8s-minikube/kicbase...:  287.99 MiB / 287.99 MiB  100.00% 44.97 MiB
	I0821 11:39:58.933899 2850908 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e as a tarball
	I0821 11:39:58.933926 2850908 cache.go:163] Loading gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e from local cache
	I0821 11:39:59.131471 2850908 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e from cached tarball
	I0821 11:39:59.131504 2850908 cache.go:195] Successfully downloaded all kic artifacts
	I0821 11:39:59.131564 2850908 start.go:365] acquiring machines lock for missing-upgrade-344332: {Name:mk6b8b1d0a67e110b2964484eca8732e0346703c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 11:39:59.131626 2850908 start.go:369] acquired machines lock for "missing-upgrade-344332" in 43.092µs
	I0821 11:39:59.131647 2850908 start.go:96] Skipping create...Using existing machine configuration
	I0821 11:39:59.131652 2850908 fix.go:54] fixHost starting: 
	I0821 11:39:59.131931 2850908 cli_runner.go:164] Run: docker container inspect missing-upgrade-344332 --format={{.State.Status}}
	W0821 11:39:59.164244 2850908 cli_runner.go:211] docker container inspect missing-upgrade-344332 --format={{.State.Status}} returned with exit code 1
	I0821 11:39:59.164300 2850908 fix.go:102] recreateIfNeeded on missing-upgrade-344332: state= err=unknown state "missing-upgrade-344332": docker container inspect missing-upgrade-344332 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-344332
	I0821 11:39:59.164318 2850908 fix.go:107] machineExists: false. err=machine does not exist
	I0821 11:39:59.168102 2850908 out.go:177] * docker "missing-upgrade-344332" container is missing, will recreate.
	I0821 11:39:59.169917 2850908 delete.go:124] DEMOLISHING missing-upgrade-344332 ...
	I0821 11:39:59.170012 2850908 cli_runner.go:164] Run: docker container inspect missing-upgrade-344332 --format={{.State.Status}}
	W0821 11:39:59.188877 2850908 cli_runner.go:211] docker container inspect missing-upgrade-344332 --format={{.State.Status}} returned with exit code 1
	W0821 11:39:59.188942 2850908 stop.go:75] unable to get state: unknown state "missing-upgrade-344332": docker container inspect missing-upgrade-344332 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-344332
	I0821 11:39:59.188965 2850908 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "missing-upgrade-344332": docker container inspect missing-upgrade-344332 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-344332
	I0821 11:39:59.189452 2850908 cli_runner.go:164] Run: docker container inspect missing-upgrade-344332 --format={{.State.Status}}
	W0821 11:39:59.228140 2850908 cli_runner.go:211] docker container inspect missing-upgrade-344332 --format={{.State.Status}} returned with exit code 1
	I0821 11:39:59.228202 2850908 delete.go:82] Unable to get host status for missing-upgrade-344332, assuming it has already been deleted: state: unknown state "missing-upgrade-344332": docker container inspect missing-upgrade-344332 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-344332
	I0821 11:39:59.228277 2850908 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-344332
	W0821 11:39:59.249081 2850908 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-344332 returned with exit code 1
	I0821 11:39:59.249123 2850908 kic.go:367] could not find the container missing-upgrade-344332 to remove it. will try anyways
	I0821 11:39:59.249181 2850908 cli_runner.go:164] Run: docker container inspect missing-upgrade-344332 --format={{.State.Status}}
	W0821 11:39:59.265941 2850908 cli_runner.go:211] docker container inspect missing-upgrade-344332 --format={{.State.Status}} returned with exit code 1
	W0821 11:39:59.265995 2850908 oci.go:84] error getting container status, will try to delete anyways: unknown state "missing-upgrade-344332": docker container inspect missing-upgrade-344332 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-344332
	I0821 11:39:59.266054 2850908 cli_runner.go:164] Run: docker exec --privileged -t missing-upgrade-344332 /bin/bash -c "sudo init 0"
	W0821 11:39:59.292505 2850908 cli_runner.go:211] docker exec --privileged -t missing-upgrade-344332 /bin/bash -c "sudo init 0" returned with exit code 1
	I0821 11:39:59.292534 2850908 oci.go:647] error shutdown missing-upgrade-344332: docker exec --privileged -t missing-upgrade-344332 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-344332
	I0821 11:40:00.292755 2850908 cli_runner.go:164] Run: docker container inspect missing-upgrade-344332 --format={{.State.Status}}
	W0821 11:40:00.319785 2850908 cli_runner.go:211] docker container inspect missing-upgrade-344332 --format={{.State.Status}} returned with exit code 1
	I0821 11:40:00.319866 2850908 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-344332": docker container inspect missing-upgrade-344332 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-344332
	I0821 11:40:00.319891 2850908 oci.go:661] temporary error: container missing-upgrade-344332 status is  but expect it to be exited
	I0821 11:40:00.319925 2850908 retry.go:31] will retry after 549.970312ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-344332": docker container inspect missing-upgrade-344332 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-344332
	I0821 11:40:00.870698 2850908 cli_runner.go:164] Run: docker container inspect missing-upgrade-344332 --format={{.State.Status}}
	W0821 11:40:00.906955 2850908 cli_runner.go:211] docker container inspect missing-upgrade-344332 --format={{.State.Status}} returned with exit code 1
	I0821 11:40:00.907016 2850908 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-344332": docker container inspect missing-upgrade-344332 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-344332
	I0821 11:40:00.907026 2850908 oci.go:661] temporary error: container missing-upgrade-344332 status is  but expect it to be exited
	I0821 11:40:00.907049 2850908 retry.go:31] will retry after 675.196472ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-344332": docker container inspect missing-upgrade-344332 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-344332
	I0821 11:40:01.582676 2850908 cli_runner.go:164] Run: docker container inspect missing-upgrade-344332 --format={{.State.Status}}
	W0821 11:40:01.600817 2850908 cli_runner.go:211] docker container inspect missing-upgrade-344332 --format={{.State.Status}} returned with exit code 1
	I0821 11:40:01.600879 2850908 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-344332": docker container inspect missing-upgrade-344332 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-344332
	I0821 11:40:01.600889 2850908 oci.go:661] temporary error: container missing-upgrade-344332 status is  but expect it to be exited
	I0821 11:40:01.600912 2850908 retry.go:31] will retry after 992.571485ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-344332": docker container inspect missing-upgrade-344332 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-344332
	I0821 11:40:02.594007 2850908 cli_runner.go:164] Run: docker container inspect missing-upgrade-344332 --format={{.State.Status}}
	W0821 11:40:02.616917 2850908 cli_runner.go:211] docker container inspect missing-upgrade-344332 --format={{.State.Status}} returned with exit code 1
	I0821 11:40:02.616977 2850908 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-344332": docker container inspect missing-upgrade-344332 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-344332
	I0821 11:40:02.616988 2850908 oci.go:661] temporary error: container missing-upgrade-344332 status is  but expect it to be exited
	I0821 11:40:02.617012 2850908 retry.go:31] will retry after 1.039835978s: couldn't verify container is exited. %v: unknown state "missing-upgrade-344332": docker container inspect missing-upgrade-344332 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-344332
	I0821 11:40:03.657055 2850908 cli_runner.go:164] Run: docker container inspect missing-upgrade-344332 --format={{.State.Status}}
	W0821 11:40:03.677702 2850908 cli_runner.go:211] docker container inspect missing-upgrade-344332 --format={{.State.Status}} returned with exit code 1
	I0821 11:40:03.677759 2850908 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-344332": docker container inspect missing-upgrade-344332 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-344332
	I0821 11:40:03.677775 2850908 oci.go:661] temporary error: container missing-upgrade-344332 status is  but expect it to be exited
	I0821 11:40:03.677799 2850908 retry.go:31] will retry after 2.985408843s: couldn't verify container is exited. %v: unknown state "missing-upgrade-344332": docker container inspect missing-upgrade-344332 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-344332
	I0821 11:40:06.663417 2850908 cli_runner.go:164] Run: docker container inspect missing-upgrade-344332 --format={{.State.Status}}
	W0821 11:40:06.681571 2850908 cli_runner.go:211] docker container inspect missing-upgrade-344332 --format={{.State.Status}} returned with exit code 1
	I0821 11:40:06.681647 2850908 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-344332": docker container inspect missing-upgrade-344332 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-344332
	I0821 11:40:06.681661 2850908 oci.go:661] temporary error: container missing-upgrade-344332 status is  but expect it to be exited
	I0821 11:40:06.681694 2850908 retry.go:31] will retry after 2.151125881s: couldn't verify container is exited. %v: unknown state "missing-upgrade-344332": docker container inspect missing-upgrade-344332 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-344332
	I0821 11:40:08.834013 2850908 cli_runner.go:164] Run: docker container inspect missing-upgrade-344332 --format={{.State.Status}}
	W0821 11:40:08.854497 2850908 cli_runner.go:211] docker container inspect missing-upgrade-344332 --format={{.State.Status}} returned with exit code 1
	I0821 11:40:08.854559 2850908 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-344332": docker container inspect missing-upgrade-344332 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-344332
	I0821 11:40:08.854570 2850908 oci.go:661] temporary error: container missing-upgrade-344332 status is  but expect it to be exited
	I0821 11:40:08.854595 2850908 retry.go:31] will retry after 6.693131242s: couldn't verify container is exited. %v: unknown state "missing-upgrade-344332": docker container inspect missing-upgrade-344332 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-344332
	I0821 11:40:15.548768 2850908 cli_runner.go:164] Run: docker container inspect missing-upgrade-344332 --format={{.State.Status}}
	W0821 11:40:15.566837 2850908 cli_runner.go:211] docker container inspect missing-upgrade-344332 --format={{.State.Status}} returned with exit code 1
	I0821 11:40:15.566900 2850908 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-344332": docker container inspect missing-upgrade-344332 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-344332
	I0821 11:40:15.566913 2850908 oci.go:661] temporary error: container missing-upgrade-344332 status is  but expect it to be exited
	I0821 11:40:15.566952 2850908 oci.go:88] couldn't shut down missing-upgrade-344332 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "missing-upgrade-344332": docker container inspect missing-upgrade-344332 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-344332
	 
	I0821 11:40:15.567020 2850908 cli_runner.go:164] Run: docker rm -f -v missing-upgrade-344332
	I0821 11:40:15.582914 2850908 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-344332
	W0821 11:40:15.598563 2850908 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-344332 returned with exit code 1
	I0821 11:40:15.598668 2850908 cli_runner.go:164] Run: docker network inspect missing-upgrade-344332 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0821 11:40:15.615459 2850908 cli_runner.go:164] Run: docker network rm missing-upgrade-344332
	I0821 11:40:15.723533 2850908 fix.go:114] Sleeping 1 second for extra luck!
	I0821 11:40:16.724519 2850908 start.go:125] createHost starting for "" (driver="docker")
	I0821 11:40:16.726923 2850908 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0821 11:40:16.727079 2850908 start.go:159] libmachine.API.Create for "missing-upgrade-344332" (driver="docker")
	I0821 11:40:16.727105 2850908 client.go:168] LocalClient.Create starting
	I0821 11:40:16.727180 2850908 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17102-2734539/.minikube/certs/ca.pem
	I0821 11:40:16.727219 2850908 main.go:141] libmachine: Decoding PEM data...
	I0821 11:40:16.727236 2850908 main.go:141] libmachine: Parsing certificate...
	I0821 11:40:16.727294 2850908 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17102-2734539/.minikube/certs/cert.pem
	I0821 11:40:16.727320 2850908 main.go:141] libmachine: Decoding PEM data...
	I0821 11:40:16.727338 2850908 main.go:141] libmachine: Parsing certificate...
	I0821 11:40:16.727604 2850908 cli_runner.go:164] Run: docker network inspect missing-upgrade-344332 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0821 11:40:16.744259 2850908 cli_runner.go:211] docker network inspect missing-upgrade-344332 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0821 11:40:16.744355 2850908 network_create.go:281] running [docker network inspect missing-upgrade-344332] to gather additional debugging logs...
	I0821 11:40:16.744377 2850908 cli_runner.go:164] Run: docker network inspect missing-upgrade-344332
	W0821 11:40:16.764252 2850908 cli_runner.go:211] docker network inspect missing-upgrade-344332 returned with exit code 1
	I0821 11:40:16.764285 2850908 network_create.go:284] error running [docker network inspect missing-upgrade-344332]: docker network inspect missing-upgrade-344332: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network missing-upgrade-344332 not found
	I0821 11:40:16.764316 2850908 network_create.go:286] output of [docker network inspect missing-upgrade-344332]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network missing-upgrade-344332 not found
	
	** /stderr **
	I0821 11:40:16.764384 2850908 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0821 11:40:16.782431 2850908 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b94741280122 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:51:cd:3e:84} reservation:<nil>}
	I0821 11:40:16.782800 2850908 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-27268fd9dec2 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:a7:04:ab:d7} reservation:<nil>}
	I0821 11:40:16.783146 2850908 network.go:214] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-716f1ef5a633 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:58:b8:91:4c} reservation:<nil>}
	I0821 11:40:16.783628 2850908 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001dd7330}
	I0821 11:40:16.783650 2850908 network_create.go:123] attempt to create docker network missing-upgrade-344332 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0821 11:40:16.783711 2850908 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=missing-upgrade-344332 missing-upgrade-344332
	I0821 11:40:16.855181 2850908 network_create.go:107] docker network missing-upgrade-344332 192.168.76.0/24 created
	I0821 11:40:16.855225 2850908 kic.go:117] calculated static IP "192.168.76.2" for the "missing-upgrade-344332" container
	I0821 11:40:16.855306 2850908 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0821 11:40:16.871846 2850908 cli_runner.go:164] Run: docker volume create missing-upgrade-344332 --label name.minikube.sigs.k8s.io=missing-upgrade-344332 --label created_by.minikube.sigs.k8s.io=true
	I0821 11:40:16.888951 2850908 oci.go:103] Successfully created a docker volume missing-upgrade-344332
	I0821 11:40:16.889049 2850908 cli_runner.go:164] Run: docker run --rm --name missing-upgrade-344332-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-344332 --entrypoint /usr/bin/test -v missing-upgrade-344332:/var gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e -d /var/lib
	I0821 11:40:18.545476 2850908 cli_runner.go:217] Completed: docker run --rm --name missing-upgrade-344332-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-344332 --entrypoint /usr/bin/test -v missing-upgrade-344332:/var gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e -d /var/lib: (1.65637464s)
	I0821 11:40:18.545511 2850908 oci.go:107] Successfully prepared a docker volume missing-upgrade-344332
	I0821 11:40:18.545534 2850908 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	W0821 11:40:18.545681 2850908 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0821 11:40:18.545800 2850908 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0821 11:40:18.615824 2850908 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname missing-upgrade-344332 --name missing-upgrade-344332 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-344332 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=missing-upgrade-344332 --network missing-upgrade-344332 --ip 192.168.76.2 --volume missing-upgrade-344332:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e
	I0821 11:40:19.027210 2850908 cli_runner.go:164] Run: docker container inspect missing-upgrade-344332 --format={{.State.Running}}
	I0821 11:40:19.053469 2850908 cli_runner.go:164] Run: docker container inspect missing-upgrade-344332 --format={{.State.Status}}
	I0821 11:40:19.088411 2850908 cli_runner.go:164] Run: docker exec missing-upgrade-344332 stat /var/lib/dpkg/alternatives/iptables
	I0821 11:40:19.189219 2850908 oci.go:144] the created container "missing-upgrade-344332" has a running status.
	I0821 11:40:19.189244 2850908 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17102-2734539/.minikube/machines/missing-upgrade-344332/id_rsa...
	I0821 11:40:19.457523 2850908 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17102-2734539/.minikube/machines/missing-upgrade-344332/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0821 11:40:19.501768 2850908 cli_runner.go:164] Run: docker container inspect missing-upgrade-344332 --format={{.State.Status}}
	I0821 11:40:19.531929 2850908 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0821 11:40:19.531952 2850908 kic_runner.go:114] Args: [docker exec --privileged missing-upgrade-344332 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0821 11:40:19.663597 2850908 cli_runner.go:164] Run: docker container inspect missing-upgrade-344332 --format={{.State.Status}}
	I0821 11:40:19.700737 2850908 machine.go:88] provisioning docker machine ...
	I0821 11:40:19.700767 2850908 ubuntu.go:169] provisioning hostname "missing-upgrade-344332"
	I0821 11:40:19.700837 2850908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-344332
	I0821 11:40:19.738637 2850908 main.go:141] libmachine: Using SSH client type: native
	I0821 11:40:19.739166 2850908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0570] 0x3a2f00 <nil>  [] 0s} 127.0.0.1 36357 <nil> <nil>}
	I0821 11:40:19.739185 2850908 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-344332 && echo "missing-upgrade-344332" | sudo tee /etc/hostname
	I0821 11:40:19.740073 2850908 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0821 11:40:22.905519 2850908 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-344332
	
	I0821 11:40:22.905607 2850908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-344332
	I0821 11:40:22.927873 2850908 main.go:141] libmachine: Using SSH client type: native
	I0821 11:40:22.928319 2850908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0570] 0x3a2f00 <nil>  [] 0s} 127.0.0.1 36357 <nil> <nil>}
	I0821 11:40:22.928344 2850908 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-344332' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-344332/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-344332' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0821 11:40:23.079383 2850908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0821 11:40:23.079463 2850908 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17102-2734539/.minikube CaCertPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17102-2734539/.minikube}
	I0821 11:40:23.079525 2850908 ubuntu.go:177] setting up certificates
	I0821 11:40:23.079552 2850908 provision.go:83] configureAuth start
	I0821 11:40:23.079642 2850908 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-344332
	I0821 11:40:23.098787 2850908 provision.go:138] copyHostCerts
	I0821 11:40:23.098853 2850908 exec_runner.go:144] found /home/jenkins/minikube-integration/17102-2734539/.minikube/ca.pem, removing ...
	I0821 11:40:23.098870 2850908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17102-2734539/.minikube/ca.pem
	I0821 11:40:23.098946 2850908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17102-2734539/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17102-2734539/.minikube/ca.pem (1078 bytes)
	I0821 11:40:23.099046 2850908 exec_runner.go:144] found /home/jenkins/minikube-integration/17102-2734539/.minikube/cert.pem, removing ...
	I0821 11:40:23.099056 2850908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17102-2734539/.minikube/cert.pem
	I0821 11:40:23.099086 2850908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17102-2734539/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17102-2734539/.minikube/cert.pem (1123 bytes)
	I0821 11:40:23.099467 2850908 exec_runner.go:144] found /home/jenkins/minikube-integration/17102-2734539/.minikube/key.pem, removing ...
	I0821 11:40:23.099482 2850908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17102-2734539/.minikube/key.pem
	I0821 11:40:23.099518 2850908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17102-2734539/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17102-2734539/.minikube/key.pem (1675 bytes)
	I0821 11:40:23.099585 2850908 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17102-2734539/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17102-2734539/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17102-2734539/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-344332 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-344332]
	I0821 11:40:23.611634 2850908 provision.go:172] copyRemoteCerts
	I0821 11:40:23.611705 2850908 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0821 11:40:23.611746 2850908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-344332
	I0821 11:40:23.639130 2850908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36357 SSHKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/machines/missing-upgrade-344332/id_rsa Username:docker}
	I0821 11:40:23.743183 2850908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-2734539/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0821 11:40:23.767817 2850908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-2734539/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0821 11:40:23.792246 2850908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-2734539/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0821 11:40:23.814789 2850908 provision.go:86] duration metric: configureAuth took 735.211529ms
	I0821 11:40:23.814825 2850908 ubuntu.go:193] setting minikube options for container-runtime
	I0821 11:40:23.815022 2850908 config.go:182] Loaded profile config "missing-upgrade-344332": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0821 11:40:23.815135 2850908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-344332
	I0821 11:40:23.835424 2850908 main.go:141] libmachine: Using SSH client type: native
	I0821 11:40:23.835864 2850908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0570] 0x3a2f00 <nil>  [] 0s} 127.0.0.1 36357 <nil> <nil>}
	I0821 11:40:23.835886 2850908 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0821 11:40:24.262206 2850908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0821 11:40:24.262277 2850908 machine.go:91] provisioned docker machine in 4.56151972s
	I0821 11:40:24.262300 2850908 client.go:171] LocalClient.Create took 7.535190119s
	I0821 11:40:24.262343 2850908 start.go:167] duration metric: libmachine.API.Create for "missing-upgrade-344332" took 7.535263956s
	I0821 11:40:24.262368 2850908 start.go:300] post-start starting for "missing-upgrade-344332" (driver="docker")
	I0821 11:40:24.262391 2850908 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0821 11:40:24.262480 2850908 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0821 11:40:24.262541 2850908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-344332
	I0821 11:40:24.291214 2850908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36357 SSHKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/machines/missing-upgrade-344332/id_rsa Username:docker}
	I0821 11:40:24.391310 2850908 ssh_runner.go:195] Run: cat /etc/os-release
	I0821 11:40:24.395182 2850908 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0821 11:40:24.395212 2850908 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0821 11:40:24.395223 2850908 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0821 11:40:24.395231 2850908 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I0821 11:40:24.395241 2850908 filesync.go:126] Scanning /home/jenkins/minikube-integration/17102-2734539/.minikube/addons for local assets ...
	I0821 11:40:24.395303 2850908 filesync.go:126] Scanning /home/jenkins/minikube-integration/17102-2734539/.minikube/files for local assets ...
	I0821 11:40:24.395386 2850908 filesync.go:149] local asset: /home/jenkins/minikube-integration/17102-2734539/.minikube/files/etc/ssl/certs/27399302.pem -> 27399302.pem in /etc/ssl/certs
	I0821 11:40:24.395492 2850908 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0821 11:40:24.404783 2850908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-2734539/.minikube/files/etc/ssl/certs/27399302.pem --> /etc/ssl/certs/27399302.pem (1708 bytes)
	I0821 11:40:24.427523 2850908 start.go:303] post-start completed in 165.127512ms
	I0821 11:40:24.427884 2850908 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-344332
	I0821 11:40:24.448055 2850908 profile.go:148] Saving config to /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/missing-upgrade-344332/config.json ...
	I0821 11:40:24.448338 2850908 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0821 11:40:24.448390 2850908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-344332
	I0821 11:40:24.466317 2850908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36357 SSHKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/machines/missing-upgrade-344332/id_rsa Username:docker}
	I0821 11:40:24.570112 2850908 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0821 11:40:24.575408 2850908 start.go:128] duration metric: createHost completed in 7.850835939s
	I0821 11:40:24.575578 2850908 cli_runner.go:164] Run: docker container inspect missing-upgrade-344332 --format={{.State.Status}}
	W0821 11:40:24.594931 2850908 fix.go:128] unexpected machine state, will restart: <nil>
	I0821 11:40:24.594956 2850908 machine.go:88] provisioning docker machine ...
	I0821 11:40:24.594973 2850908 ubuntu.go:169] provisioning hostname "missing-upgrade-344332"
	I0821 11:40:24.595039 2850908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-344332
	I0821 11:40:24.613682 2850908 main.go:141] libmachine: Using SSH client type: native
	I0821 11:40:24.614223 2850908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0570] 0x3a2f00 <nil>  [] 0s} 127.0.0.1 36357 <nil> <nil>}
	I0821 11:40:24.614243 2850908 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-344332 && echo "missing-upgrade-344332" | sudo tee /etc/hostname
	I0821 11:40:24.769323 2850908 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-344332
	
	I0821 11:40:24.769419 2850908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-344332
	I0821 11:40:24.793615 2850908 main.go:141] libmachine: Using SSH client type: native
	I0821 11:40:24.794127 2850908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0570] 0x3a2f00 <nil>  [] 0s} 127.0.0.1 36357 <nil> <nil>}
	I0821 11:40:24.794154 2850908 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-344332' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-344332/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-344332' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0821 11:40:24.946031 2850908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0821 11:40:24.946054 2850908 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17102-2734539/.minikube CaCertPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17102-2734539/.minikube}
	I0821 11:40:24.946079 2850908 ubuntu.go:177] setting up certificates
	I0821 11:40:24.946087 2850908 provision.go:83] configureAuth start
	I0821 11:40:24.946150 2850908 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-344332
	I0821 11:40:24.965483 2850908 provision.go:138] copyHostCerts
	I0821 11:40:24.965559 2850908 exec_runner.go:144] found /home/jenkins/minikube-integration/17102-2734539/.minikube/ca.pem, removing ...
	I0821 11:40:24.965573 2850908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17102-2734539/.minikube/ca.pem
	I0821 11:40:24.965667 2850908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17102-2734539/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17102-2734539/.minikube/ca.pem (1078 bytes)
	I0821 11:40:24.965772 2850908 exec_runner.go:144] found /home/jenkins/minikube-integration/17102-2734539/.minikube/cert.pem, removing ...
	I0821 11:40:24.965780 2850908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17102-2734539/.minikube/cert.pem
	I0821 11:40:24.965807 2850908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17102-2734539/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17102-2734539/.minikube/cert.pem (1123 bytes)
	I0821 11:40:24.965893 2850908 exec_runner.go:144] found /home/jenkins/minikube-integration/17102-2734539/.minikube/key.pem, removing ...
	I0821 11:40:24.965903 2850908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17102-2734539/.minikube/key.pem
	I0821 11:40:24.965929 2850908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17102-2734539/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17102-2734539/.minikube/key.pem (1675 bytes)
	I0821 11:40:24.965992 2850908 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17102-2734539/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17102-2734539/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17102-2734539/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-344332 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-344332]
	I0821 11:40:25.098603 2850908 provision.go:172] copyRemoteCerts
	I0821 11:40:25.098684 2850908 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0821 11:40:25.098739 2850908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-344332
	I0821 11:40:25.120159 2850908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36357 SSHKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/machines/missing-upgrade-344332/id_rsa Username:docker}
	I0821 11:40:25.240085 2850908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-2734539/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0821 11:40:25.272838 2850908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-2734539/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0821 11:40:25.300822 2850908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-2734539/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0821 11:40:25.330043 2850908 provision.go:86] duration metric: configureAuth took 383.942343ms
	I0821 11:40:25.330067 2850908 ubuntu.go:193] setting minikube options for container-runtime
	I0821 11:40:25.330268 2850908 config.go:182] Loaded profile config "missing-upgrade-344332": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0821 11:40:25.330388 2850908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-344332
	I0821 11:40:25.351002 2850908 main.go:141] libmachine: Using SSH client type: native
	I0821 11:40:25.351463 2850908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0570] 0x3a2f00 <nil>  [] 0s} 127.0.0.1 36357 <nil> <nil>}
	I0821 11:40:25.351485 2850908 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0821 11:40:25.699030 2850908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0821 11:40:25.699050 2850908 machine.go:91] provisioned docker machine in 1.104085953s
	I0821 11:40:25.699061 2850908 start.go:300] post-start starting for "missing-upgrade-344332" (driver="docker")
	I0821 11:40:25.699073 2850908 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0821 11:40:25.699137 2850908 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0821 11:40:25.699182 2850908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-344332
	I0821 11:40:25.726620 2850908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36357 SSHKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/machines/missing-upgrade-344332/id_rsa Username:docker}
	I0821 11:40:25.827703 2850908 ssh_runner.go:195] Run: cat /etc/os-release
	I0821 11:40:25.831931 2850908 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0821 11:40:25.831958 2850908 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0821 11:40:25.831971 2850908 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0821 11:40:25.831977 2850908 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I0821 11:40:25.831987 2850908 filesync.go:126] Scanning /home/jenkins/minikube-integration/17102-2734539/.minikube/addons for local assets ...
	I0821 11:40:25.832044 2850908 filesync.go:126] Scanning /home/jenkins/minikube-integration/17102-2734539/.minikube/files for local assets ...
	I0821 11:40:25.832124 2850908 filesync.go:149] local asset: /home/jenkins/minikube-integration/17102-2734539/.minikube/files/etc/ssl/certs/27399302.pem -> 27399302.pem in /etc/ssl/certs
	I0821 11:40:25.832234 2850908 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0821 11:40:25.842005 2850908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-2734539/.minikube/files/etc/ssl/certs/27399302.pem --> /etc/ssl/certs/27399302.pem (1708 bytes)
	I0821 11:40:25.870776 2850908 start.go:303] post-start completed in 171.699344ms
	I0821 11:40:25.870862 2850908 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0821 11:40:25.870911 2850908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-344332
	I0821 11:40:25.892128 2850908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36357 SSHKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/machines/missing-upgrade-344332/id_rsa Username:docker}
	I0821 11:40:25.987677 2850908 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0821 11:40:25.993127 2850908 fix.go:56] fixHost completed within 26.86146522s
	I0821 11:40:25.993150 2850908 start.go:83] releasing machines lock for "missing-upgrade-344332", held for 26.861516165s
	I0821 11:40:25.993231 2850908 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-344332
	I0821 11:40:26.011380 2850908 ssh_runner.go:195] Run: cat /version.json
	I0821 11:40:26.011401 2850908 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0821 11:40:26.011436 2850908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-344332
	I0821 11:40:26.011462 2850908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-344332
	I0821 11:40:26.037178 2850908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36357 SSHKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/machines/missing-upgrade-344332/id_rsa Username:docker}
	I0821 11:40:26.037813 2850908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36357 SSHKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/machines/missing-upgrade-344332/id_rsa Username:docker}
	W0821 11:40:26.257902 2850908 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0821 11:40:26.257984 2850908 ssh_runner.go:195] Run: systemctl --version
	I0821 11:40:26.264312 2850908 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0821 11:40:26.404905 2850908 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0821 11:40:26.412636 2850908 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0821 11:40:26.446873 2850908 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0821 11:40:26.446990 2850908 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0821 11:40:26.509375 2850908 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0821 11:40:26.509401 2850908 start.go:466] detecting cgroup driver to use...
	I0821 11:40:26.509455 2850908 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0821 11:40:26.509517 2850908 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0821 11:40:26.539939 2850908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0821 11:40:26.553240 2850908 docker.go:196] disabling cri-docker service (if available) ...
	I0821 11:40:26.553327 2850908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0821 11:40:26.566854 2850908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0821 11:40:26.579104 2850908 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0821 11:40:26.593474 2850908 docker.go:206] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0821 11:40:26.593564 2850908 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0821 11:40:26.780215 2850908 docker.go:212] disabling docker service ...
	I0821 11:40:26.780295 2850908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0821 11:40:26.804121 2850908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0821 11:40:26.820572 2850908 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0821 11:40:27.025278 2850908 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0821 11:40:27.195444 2850908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0821 11:40:27.213195 2850908 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0821 11:40:27.237870 2850908 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0821 11:40:27.237984 2850908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0821 11:40:27.255772 2850908 out.go:177] 
	W0821 11:40:27.258039 2850908 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0821 11:40:27.258062 2850908 out.go:239] * 
	W0821 11:40:27.259122 2850908 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0821 11:40:27.260677 2850908 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:343: failed missing container upgrade from v1.17.0. args: out/minikube-linux-arm64 start -p missing-upgrade-344332 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio : exit status 90
version_upgrade_test.go:345: *** TestMissingContainerUpgrade FAILED at 2023-08-21 11:40:27.32887986 +0000 UTC m=+2311.762564767
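The exit status 90 above traces to the `sed` step near the end of the log: it assumes `/etc/crio/crio.conf.d/02-crio.conf` exists, but the v1.17.0-era base image being upgraded ships no such drop-in, so `sed` exits with status 2 and minikube aborts with `RUNTIME_ENABLE`. A minimal sketch of the guard that step lacks (the path and pause-image value are taken from the log; the `/tmp/crio-demo` location is used only so the sketch runs without root, and the `[crio.image]` section header is an assumption about the drop-in layout):

```shell
# Real path on the node: /etc/crio/crio.conf.d/02-crio.conf
CONF=/tmp/crio-demo/02-crio.conf
mkdir -p "$(dirname "$CONF")"
if [ -f "$CONF" ]; then
  # Drop-in exists (newer base images): rewrite the pause image in place,
  # exactly as the failing minikube step does.
  sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$CONF"
else
  # Drop-in missing (v1.17.0-era image, as in this failure): create it
  # instead of letting sed fail on a nonexistent file.
  printf '[crio.image]\npause_image = "registry.k8s.io/pause:3.2"\n' > "$CONF"
fi
grep pause_image "$CONF"
```

Either branch leaves the drop-in containing the desired `pause_image` line, so a subsequent `systemctl restart crio` would pick it up instead of the provisioning run dying mid-configuration.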
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-344332
helpers_test.go:235: (dbg) docker inspect missing-upgrade-344332:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "11195c640a5139d628bc5ed2735fedade7d509503bddd74c8e362e230874d28d",
	        "Created": "2023-08-21T11:40:18.634214848Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2851781,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-08-21T11:40:19.018866321Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9b79b8263a5873a7b57b8bb7698df1f71e90108b3174dea92dc6c576c0a9dbf9",
	        "ResolvConfPath": "/var/lib/docker/containers/11195c640a5139d628bc5ed2735fedade7d509503bddd74c8e362e230874d28d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/11195c640a5139d628bc5ed2735fedade7d509503bddd74c8e362e230874d28d/hostname",
	        "HostsPath": "/var/lib/docker/containers/11195c640a5139d628bc5ed2735fedade7d509503bddd74c8e362e230874d28d/hosts",
	        "LogPath": "/var/lib/docker/containers/11195c640a5139d628bc5ed2735fedade7d509503bddd74c8e362e230874d28d/11195c640a5139d628bc5ed2735fedade7d509503bddd74c8e362e230874d28d-json.log",
	        "Name": "/missing-upgrade-344332",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "missing-upgrade-344332:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "missing-upgrade-344332",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/977b0be5387da43fd501f6136b614bfa07e79bc8a09aff3fac3e6d81269d00e8-init/diff:/var/lib/docker/overlay2/dd9aa3ebacc2bd8e49180faa651efcfe3eeb7b9db1d119ddd7565a97f7c1a653/diff:/var/lib/docker/overlay2/372f589b9252b91bf24677acd194deea14f086dbc860b4e26b9f55138f26ad75/diff:/var/lib/docker/overlay2/d1bdbbb2aa0f5709103214fc0ed7b2a69bdae3eba73e9edd77d032183517ba0d/diff:/var/lib/docker/overlay2/ec760039300c00751b990318e7d4fd5653a4b38215e97057328f616006cc1cbc/diff:/var/lib/docker/overlay2/335a3895c199dee9577a9cfdca67e05bc991fed57bbe8fe45f6260b00cef28ae/diff:/var/lib/docker/overlay2/1476be91fa8a913e299286825de3bb45f489974643c957f6266dd5e95b813511/diff:/var/lib/docker/overlay2/6ce346da6e0119f196b294ce32c603c18434d76d8a0f0aae0f4d0498044c7696/diff:/var/lib/docker/overlay2/72effb1d8ea0d3eb3878ffe1f4e23e43af21f55202f3b3da584b4a59bf7bc1d9/diff:/var/lib/docker/overlay2/88d09413846eef42a622d19bcc13d1c99265a60fe7d711d18c864292c2243996/diff:/var/lib/docker/overlay2/f7a0837009cab3264604a770078fe279dffda3ef8f2606f61d3e14e8ffa7ca69/diff:/var/lib/docker/overlay2/50715c3767249b7118651d0e8859f9a699351806692aaabe5002b23150568be3/diff:/var/lib/docker/overlay2/f5598a09723668820716b4d8a17ea216c6dcd5c46c122c1b1e4b99f9eda8ece9/diff:/var/lib/docker/overlay2/55bf8ce28f7496975fbb9c0697d320d2703fc18b43ade0a48812daf1e3749a08/diff:/var/lib/docker/overlay2/f46cfdd912a12dbd7cfb16ece3585374c8ef8afe3773b7ae2a72880bf504bf96/diff:/var/lib/docker/overlay2/5299f6035f16664c9b0425b69549f9d3738d79532eb1f325e8bb3a5996c5fff0/diff:/var/lib/docker/overlay2/cc03a7a3b778d57ec568d61366762fcaa5c193e420b67a8f0348fa647c3e07dc/diff:/var/lib/docker/overlay2/873c77481f1ecac5332b42f72029b2de9f3e35eb0a7ec15c33879dd05afd76fc/diff:/var/lib/docker/overlay2/232e9ef7fdd260f8301362df93cab1dc3200d640f2f615ec6df4ac2e5ffac0d4/diff:/var/lib/docker/overlay2/c1ef095dbc099778c87aca10ffe884df314d91e44c7908e25fd5884276e2b8bb/diff:/var/lib/docker/overlay2/68ded1a0253488d28be4b5e8e1512bd69405b122bfa02806bcd33af163c81a06/diff:/var/lib/docker/overlay2/a1b83aa2cc7e82102a28b3fbfbbf56096137d8d8029656951921ffd539a263d4/diff:/var/lib/docker/overlay2/dfa842b00004aa9877284bf900cfcaadf2b8314cfe1e2e000ebfbcce54fa5f02/diff:/var/lib/docker/overlay2/7df3755261310a01beb2ccaff67345e7f3e570ea29d2524d56abb88dbfb4be3b/diff:/var/lib/docker/overlay2/12c7073241acbdace6f0d231f90928145337557075d43b1980589a07ea109e42/diff:/var/lib/docker/overlay2/6f416a2a46d0f1aadb0b42a1ce4809361c80a390799e01cdd6c3de8abb5f784c/diff:/var/lib/docker/overlay2/55871e23789764420375c9d3d430cc04824ecaf1b8a9b7ba1857beec9be8b8ab/diff:/var/lib/docker/overlay2/a1b8b4759c5d13769ed38cc0c887441e28e6936879a5fb6acfac8854c0997daa/diff:/var/lib/docker/overlay2/7ed2860a60aa12d711a7d08e035097ca860ced4bfbeee73d58d71b64e3b397a7/diff:/var/lib/docker/overlay2/1836c614b6df7333043f6b9c91acd33232b7f09bce76e738619ad26afe5ece1a/diff:/var/lib/docker/overlay2/b7831147792adaf30cc5acd4390c5b6a02235d9a7259b284ac7068fe9f336d21/diff:/var/lib/docker/overlay2/4924c669704906a86aebf9308d35252576b810605304cf81f9dd0da8feace018/diff:/var/lib/docker/overlay2/d993bcf0c53386f42c2189366b29cd8cbd0fcc4c997223e6a05c0092e43c4e77/diff:/var/lib/docker/overlay2/811aaddef9dda72dc55015f2339945a960195c242c516793600e539a438dd859/diff:/var/lib/docker/overlay2/c800e64cfad3619e218a4a15af05c46166a9f45b4d71c9c7e3cd62ea10f03c87/diff",
	                "MergedDir": "/var/lib/docker/overlay2/977b0be5387da43fd501f6136b614bfa07e79bc8a09aff3fac3e6d81269d00e8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/977b0be5387da43fd501f6136b614bfa07e79bc8a09aff3fac3e6d81269d00e8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/977b0be5387da43fd501f6136b614bfa07e79bc8a09aff3fac3e6d81269d00e8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "missing-upgrade-344332",
	                "Source": "/var/lib/docker/volumes/missing-upgrade-344332/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "missing-upgrade-344332",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "missing-upgrade-344332",
	                "name.minikube.sigs.k8s.io": "missing-upgrade-344332",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2b1f709062f4b6a6b5cd1b71d409e46fb0b7455eddf440b86df65f72d7a82224",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36357"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36356"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36353"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36355"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36354"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/2b1f709062f4",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "missing-upgrade-344332": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "11195c640a51",
	                        "missing-upgrade-344332"
	                    ],
	                    "NetworkID": "f9e79522a5ad20c1e260198ad29dd198cd627c341612292dbf43a2aa8826e18f",
	                    "EndpointID": "1828b349abf84a76127aae1014e6a4a71205e738df095db22b0bfaeeab4822f2",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
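The `docker inspect` output above publishes each container port on a loopback host port (e.g. `22/tcp` → `127.0.0.1:36357`). As an illustrative aside, not part of the test output, the SSH host port can be pulled out of that JSON like so:

```python
import json

# Minimal sketch: extract the published host port for a container port from
# `docker container inspect` output. The JSON below is a trimmed stand-in for
# the NetworkSettings.Ports block in the log above (illustrative only).
inspect_json = """
[{"NetworkSettings": {"Ports": {
    "22/tcp":   [{"HostIp": "127.0.0.1", "HostPort": "36357"}],
    "8443/tcp": [{"HostIp": "127.0.0.1", "HostPort": "36354"}]
}}}]
"""

def host_port(inspect_output: str, container_port: str) -> str:
    """Return the first published HostPort for container_port, e.g. '22/tcp'."""
    data = json.loads(inspect_output)
    bindings = data[0]["NetworkSettings"]["Ports"][container_port]
    return bindings[0]["HostPort"]

print(host_port(inspect_json, "22/tcp"))  # the loopback port SSH is reachable on
```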
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p missing-upgrade-344332 -n missing-upgrade-344332
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p missing-upgrade-344332 -n missing-upgrade-344332: exit status 6 (451.170005ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0821 11:40:27.798245 2852870 status.go:415] kubeconfig endpoint: got: 192.168.59.13:8443, want: 192.168.76.2:8443

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "missing-upgrade-344332" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
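The exit status 6 above comes from `status.go` noticing that the kubeconfig still points at the old endpoint (192.168.59.13:8443) rather than the recreated container's 192.168.76.2:8443. A hedged sketch of that comparison (the helper name is mine, not actual minikube code):

```python
from urllib.parse import urlparse

# Illustrative sketch of the check behind the
# "kubeconfig endpoint: got: ..., want: ..." error above.
# `kubeconfig_stale` is a hypothetical helper, not a minikube API.
def kubeconfig_stale(server_url: str, want_host_port: str) -> bool:
    """True when the kubeconfig cluster server does not match the live node."""
    parsed = urlparse(server_url)
    got = f"{parsed.hostname}:{parsed.port}"
    return got != want_host_port

# Values from the log: kubeconfig says 192.168.59.13, the node is 192.168.76.2.
print(kubeconfig_stale("https://192.168.59.13:8443", "192.168.76.2:8443"))
```

When this kind of mismatch occurs, the fix the status output itself suggests is `minikube update-context`.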
helpers_test.go:175: Cleaning up "missing-upgrade-344332" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-344332
E0821 11:40:27.833919 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/addons-664125/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-344332: (2.458165286s)
--- FAIL: TestMissingContainerUpgrade (145.63s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (93.01s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:195: (dbg) Run:  /tmp/minikube-v1.17.0.3628476767.exe start -p stopped-upgrade-816837 --memory=2200 --vm-driver=docker  --container-runtime=crio
E0821 11:40:31.841661 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/ingress-addon-legacy-354854/client.crt: no such file or directory
version_upgrade_test.go:195: (dbg) Done: /tmp/minikube-v1.17.0.3628476767.exe start -p stopped-upgrade-816837 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m5.533184747s)
version_upgrade_test.go:204: (dbg) Run:  /tmp/minikube-v1.17.0.3628476767.exe -p stopped-upgrade-816837 stop
version_upgrade_test.go:204: (dbg) Done: /tmp/minikube-v1.17.0.3628476767.exe -p stopped-upgrade-816837 stop: (20.678821701s)
version_upgrade_test.go:210: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-816837 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:210: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p stopped-upgrade-816837 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (6.780818645s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-816837] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17102
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17102-2734539/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17102-2734539/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.27.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.4
	* Using the docker driver based on existing profile
	* Starting control plane node stopped-upgrade-816837 in cluster stopped-upgrade-816837
	* Pulling base image ...
	* Restarting existing docker container for "stopped-upgrade-816837" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0821 11:41:57.511926 2861225 out.go:296] Setting OutFile to fd 1 ...
	I0821 11:41:57.512266 2861225 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 11:41:57.512298 2861225 out.go:309] Setting ErrFile to fd 2...
	I0821 11:41:57.512319 2861225 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 11:41:57.512619 2861225 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17102-2734539/.minikube/bin
	I0821 11:41:57.513087 2861225 out.go:303] Setting JSON to false
	I0821 11:41:57.514346 2861225 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":73461,"bootTime":1692544656,"procs":386,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0821 11:41:57.514448 2861225 start.go:138] virtualization:  
	I0821 11:41:57.517290 2861225 out.go:177] * [stopped-upgrade-816837] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0821 11:41:57.520020 2861225 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4
	I0821 11:41:57.533971 2861225 notify.go:220] Checking for updates...
	I0821 11:41:57.537681 2861225 out.go:177]   - MINIKUBE_LOCATION=17102
	I0821 11:41:57.539673 2861225 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0821 11:41:57.541480 2861225 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17102-2734539/kubeconfig
	I0821 11:41:57.543839 2861225 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17102-2734539/.minikube
	I0821 11:41:57.545740 2861225 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0821 11:41:57.548097 2861225 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0821 11:41:57.550625 2861225 config.go:182] Loaded profile config "stopped-upgrade-816837": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0821 11:41:57.552693 2861225 out.go:177] * Kubernetes 1.27.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.4
	I0821 11:41:57.554467 2861225 driver.go:373] Setting default libvirt URI to qemu:///system
	I0821 11:41:57.597031 2861225 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0821 11:41:57.597130 2861225 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0821 11:41:57.738402 2861225 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4.checksum
	I0821 11:41:57.746919 2861225 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:44 SystemTime:2023-08-21 11:41:57.7315877 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archite
cture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> Ser
verErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0821 11:41:57.747020 2861225 docker.go:294] overlay module found
	I0821 11:41:57.750982 2861225 out.go:177] * Using the docker driver based on existing profile
	I0821 11:41:57.752966 2861225 start.go:298] selected driver: docker
	I0821 11:41:57.752986 2861225 start.go:902] validating driver "docker" against &{Name:stopped-upgrade-816837 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:stopped-upgrade-816837 Namespace:default APIServerName:minikubeCA APIServer
Names:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.198 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath
: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 11:41:57.753081 2861225 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0821 11:41:57.753721 2861225 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0821 11:41:57.841913 2861225 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:44 SystemTime:2023-08-21 11:41:57.832176972 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0821 11:41:57.842246 2861225 cni.go:84] Creating CNI manager for ""
	I0821 11:41:57.842256 2861225 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0821 11:41:57.842267 2861225 start_flags.go:319] config:
	{Name:stopped-upgrade-816837 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:stopped-upgrade-816837 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket
: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.198 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 11:41:57.844499 2861225 out.go:177] * Starting control plane node stopped-upgrade-816837 in cluster stopped-upgrade-816837
	I0821 11:41:57.846204 2861225 cache.go:122] Beginning downloading kic base image for docker with crio
	I0821 11:41:57.847862 2861225 out.go:177] * Pulling base image ...
	I0821 11:41:57.849628 2861225 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I0821 11:41:57.849773 2861225 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I0821 11:41:57.874888 2861225 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon, skipping pull
	I0821 11:41:57.874909 2861225 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e exists in daemon, skipping load
	W0821 11:41:57.928230 2861225 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I0821 11:41:57.928376 2861225 profile.go:148] Saving config to /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/stopped-upgrade-816837/config.json ...
	I0821 11:41:57.928624 2861225 cache.go:195] Successfully downloaded all kic artifacts
	I0821 11:41:57.928689 2861225 start.go:365] acquiring machines lock for stopped-upgrade-816837: {Name:mk3952d4988efe21945a69f858c1255936e3df8a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 11:41:57.928742 2861225 start.go:369] acquired machines lock for "stopped-upgrade-816837" in 30.162µs
	I0821 11:41:57.928755 2861225 start.go:96] Skipping create...Using existing machine configuration
	I0821 11:41:57.928760 2861225 fix.go:54] fixHost starting: 
	I0821 11:41:57.929020 2861225 cli_runner.go:164] Run: docker container inspect stopped-upgrade-816837 --format={{.State.Status}}
	I0821 11:41:57.929290 2861225 cache.go:107] acquiring lock: {Name:mk7f8a34da0b383537fabc9b6e390429eff319a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 11:41:57.929355 2861225 cache.go:115] /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0821 11:41:57.929363 2861225 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 78.012µs
	I0821 11:41:57.929372 2861225 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0821 11:41:57.929380 2861225 cache.go:107] acquiring lock: {Name:mk4153eae7806e222b01564ed0cfe02695401b0c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 11:41:57.929421 2861225 cache.go:115] /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I0821 11:41:57.929427 2861225 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 48.836µs
	I0821 11:41:57.929434 2861225 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	I0821 11:41:57.929441 2861225 cache.go:107] acquiring lock: {Name:mkf8e6a0246b532c0a6d1597764c57cbd46923e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 11:41:57.929470 2861225 cache.go:115] /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I0821 11:41:57.929475 2861225 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 35.289µs
	I0821 11:41:57.929482 2861225 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	I0821 11:41:57.929488 2861225 cache.go:107] acquiring lock: {Name:mk80067060178902457fd3cca5de63d48ad9f542 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 11:41:57.929512 2861225 cache.go:115] /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I0821 11:41:57.929517 2861225 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 30.08µs
	I0821 11:41:57.929523 2861225 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	I0821 11:41:57.929529 2861225 cache.go:107] acquiring lock: {Name:mk05a25396c7058d8baf370462f200fc462eb9af Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 11:41:57.929553 2861225 cache.go:115] /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I0821 11:41:57.929557 2861225 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 28.922µs
	I0821 11:41:57.929563 2861225 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	I0821 11:41:57.929569 2861225 cache.go:107] acquiring lock: {Name:mke90e9cef34726fd360e9e46700eff141b32ef8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 11:41:57.929594 2861225 cache.go:115] /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I0821 11:41:57.929599 2861225 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 30.301µs
	I0821 11:41:57.929605 2861225 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	I0821 11:41:57.929611 2861225 cache.go:107] acquiring lock: {Name:mk0492acfd72012e7e6b77986a4d0f1831851535 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 11:41:57.929633 2861225 cache.go:115] /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I0821 11:41:57.929637 2861225 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 27.503µs
	I0821 11:41:57.929643 2861225 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I0821 11:41:57.929648 2861225 cache.go:107] acquiring lock: {Name:mka8fc0beab33e5fbcbe22d83cad21f967d544a0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0821 11:41:57.929672 2861225 cache.go:115] /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I0821 11:41:57.929676 2861225 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 28.668µs
	I0821 11:41:57.929682 2861225 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	I0821 11:41:57.929688 2861225 cache.go:87] Successfully saved all images to host disk.
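Because the v1.20.2 preload tarball 404s (warning above), minikube falls back to per-image cache files, mapping each image ref to a path under `.minikube/cache/images/<arch>/` with the tag's `:` replaced by `_`, as the `cache.go` lines show. A sketch of that path mapping (the function name is mine, not minikube's):

```python
from pathlib import PurePosixPath

# Illustrative sketch of the image-ref -> cache-path mapping visible in the
# log above (e.g. registry.k8s.io/pause:3.2 -> .../registry.k8s.io/pause_3.2).
# `image_cache_path` is a hypothetical helper, not actual minikube code.
def image_cache_path(minikube_home: str, arch: str, image_ref: str) -> str:
    repo, _, tag = image_ref.rpartition(":")
    return str(PurePosixPath(minikube_home, "cache", "images", arch, f"{repo}_{tag}"))

print(image_cache_path("/home/jenkins/.minikube", "arm64", "registry.k8s.io/pause:3.2"))
```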
	I0821 11:41:57.948728 2861225 fix.go:102] recreateIfNeeded on stopped-upgrade-816837: state=Stopped err=<nil>
	W0821 11:41:57.948757 2861225 fix.go:128] unexpected machine state, will restart: <nil>
	I0821 11:41:57.950961 2861225 out.go:177] * Restarting existing docker container for "stopped-upgrade-816837" ...
	I0821 11:41:57.953033 2861225 cli_runner.go:164] Run: docker start stopped-upgrade-816837
	I0821 11:41:58.335791 2861225 cli_runner.go:164] Run: docker container inspect stopped-upgrade-816837 --format={{.State.Status}}
	I0821 11:41:58.391914 2861225 kic.go:426] container "stopped-upgrade-816837" state is running.
	I0821 11:41:58.392350 2861225 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-816837
	I0821 11:41:58.436195 2861225 profile.go:148] Saving config to /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/stopped-upgrade-816837/config.json ...
	I0821 11:41:58.436419 2861225 machine.go:88] provisioning docker machine ...
	I0821 11:41:58.436433 2861225 ubuntu.go:169] provisioning hostname "stopped-upgrade-816837"
	I0821 11:41:58.436486 2861225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-816837
	I0821 11:41:58.458035 2861225 main.go:141] libmachine: Using SSH client type: native
	I0821 11:41:58.458472 2861225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0570] 0x3a2f00 <nil>  [] 0s} 127.0.0.1 36374 <nil> <nil>}
	I0821 11:41:58.458484 2861225 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-816837 && echo "stopped-upgrade-816837" | sudo tee /etc/hostname
	I0821 11:41:58.459206 2861225 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0821 11:42:01.623398 2861225 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-816837
	
	I0821 11:42:01.623565 2861225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-816837
	I0821 11:42:01.656390 2861225 main.go:141] libmachine: Using SSH client type: native
	I0821 11:42:01.656946 2861225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0570] 0x3a2f00 <nil>  [] 0s} 127.0.0.1 36374 <nil> <nil>}
	I0821 11:42:01.656966 2861225 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-816837' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-816837/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-816837' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0821 11:42:01.806819 2861225 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0821 11:42:01.806840 2861225 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17102-2734539/.minikube CaCertPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17102-2734539/.minikube}
	I0821 11:42:01.806862 2861225 ubuntu.go:177] setting up certificates
	I0821 11:42:01.806871 2861225 provision.go:83] configureAuth start
	I0821 11:42:01.806932 2861225 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-816837
	I0821 11:42:01.831296 2861225 provision.go:138] copyHostCerts
	I0821 11:42:01.831358 2861225 exec_runner.go:144] found /home/jenkins/minikube-integration/17102-2734539/.minikube/key.pem, removing ...
	I0821 11:42:01.831388 2861225 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17102-2734539/.minikube/key.pem
	I0821 11:42:01.831466 2861225 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17102-2734539/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17102-2734539/.minikube/key.pem (1675 bytes)
	I0821 11:42:01.831559 2861225 exec_runner.go:144] found /home/jenkins/minikube-integration/17102-2734539/.minikube/ca.pem, removing ...
	I0821 11:42:01.831563 2861225 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17102-2734539/.minikube/ca.pem
	I0821 11:42:01.831587 2861225 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17102-2734539/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17102-2734539/.minikube/ca.pem (1078 bytes)
	I0821 11:42:01.831641 2861225 exec_runner.go:144] found /home/jenkins/minikube-integration/17102-2734539/.minikube/cert.pem, removing ...
	I0821 11:42:01.831645 2861225 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17102-2734539/.minikube/cert.pem
	I0821 11:42:01.831670 2861225 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17102-2734539/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17102-2734539/.minikube/cert.pem (1123 bytes)
	I0821 11:42:01.831716 2861225 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17102-2734539/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17102-2734539/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17102-2734539/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-816837 san=[192.168.59.198 127.0.0.1 localhost 127.0.0.1 minikube stopped-upgrade-816837]
	I0821 11:42:02.059831 2861225 provision.go:172] copyRemoteCerts
	I0821 11:42:02.059902 2861225 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0821 11:42:02.059951 2861225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-816837
	I0821 11:42:02.081763 2861225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36374 SSHKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/machines/stopped-upgrade-816837/id_rsa Username:docker}
	I0821 11:42:02.183802 2861225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-2734539/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0821 11:42:02.209779 2861225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-2734539/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0821 11:42:02.236005 2861225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-2734539/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0821 11:42:02.263463 2861225 provision.go:86] duration metric: configureAuth took 456.579596ms
	I0821 11:42:02.263534 2861225 ubuntu.go:193] setting minikube options for container-runtime
	I0821 11:42:02.263770 2861225 config.go:182] Loaded profile config "stopped-upgrade-816837": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0821 11:42:02.263923 2861225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-816837
	I0821 11:42:02.295000 2861225 main.go:141] libmachine: Using SSH client type: native
	I0821 11:42:02.295425 2861225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3a0570] 0x3a2f00 <nil>  [] 0s} 127.0.0.1 36374 <nil> <nil>}
	I0821 11:42:02.295439 2861225 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0821 11:42:02.726955 2861225 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0821 11:42:02.727023 2861225 machine.go:91] provisioned docker machine in 4.290593744s
	I0821 11:42:02.727047 2861225 start.go:300] post-start starting for "stopped-upgrade-816837" (driver="docker")
	I0821 11:42:02.727069 2861225 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0821 11:42:02.727154 2861225 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0821 11:42:02.727234 2861225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-816837
	I0821 11:42:02.746382 2861225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36374 SSHKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/machines/stopped-upgrade-816837/id_rsa Username:docker}
	I0821 11:42:02.847620 2861225 ssh_runner.go:195] Run: cat /etc/os-release
	I0821 11:42:02.851899 2861225 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0821 11:42:02.851922 2861225 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0821 11:42:02.851932 2861225 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0821 11:42:02.851939 2861225 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I0821 11:42:02.851948 2861225 filesync.go:126] Scanning /home/jenkins/minikube-integration/17102-2734539/.minikube/addons for local assets ...
	I0821 11:42:02.851998 2861225 filesync.go:126] Scanning /home/jenkins/minikube-integration/17102-2734539/.minikube/files for local assets ...
	I0821 11:42:02.852078 2861225 filesync.go:149] local asset: /home/jenkins/minikube-integration/17102-2734539/.minikube/files/etc/ssl/certs/27399302.pem -> 27399302.pem in /etc/ssl/certs
	I0821 11:42:02.852179 2861225 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0821 11:42:02.861016 2861225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17102-2734539/.minikube/files/etc/ssl/certs/27399302.pem --> /etc/ssl/certs/27399302.pem (1708 bytes)
	I0821 11:42:02.885720 2861225 start.go:303] post-start completed in 158.645917ms
	I0821 11:42:02.885863 2861225 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0821 11:42:02.885992 2861225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-816837
	I0821 11:42:02.909629 2861225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36374 SSHKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/machines/stopped-upgrade-816837/id_rsa Username:docker}
	I0821 11:42:03.012644 2861225 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0821 11:42:03.019895 2861225 fix.go:56] fixHost completed within 5.091126233s
	I0821 11:42:03.019916 2861225 start.go:83] releasing machines lock for "stopped-upgrade-816837", held for 5.091165871s
	I0821 11:42:03.019986 2861225 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-816837
	I0821 11:42:03.041908 2861225 ssh_runner.go:195] Run: cat /version.json
	I0821 11:42:03.041944 2861225 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0821 11:42:03.041972 2861225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-816837
	I0821 11:42:03.042013 2861225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-816837
	I0821 11:42:03.078851 2861225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36374 SSHKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/machines/stopped-upgrade-816837/id_rsa Username:docker}
	I0821 11:42:03.096763 2861225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36374 SSHKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/machines/stopped-upgrade-816837/id_rsa Username:docker}
	W0821 11:42:03.191033 2861225 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0821 11:42:03.191114 2861225 ssh_runner.go:195] Run: systemctl --version
	I0821 11:42:03.269054 2861225 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0821 11:42:03.552888 2861225 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0821 11:42:03.558788 2861225 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0821 11:42:03.584250 2861225 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0821 11:42:03.584387 2861225 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0821 11:42:03.624635 2861225 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0821 11:42:03.624696 2861225 start.go:466] detecting cgroup driver to use...
	I0821 11:42:03.624742 2861225 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0821 11:42:03.624819 2861225 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0821 11:42:03.663159 2861225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0821 11:42:03.677285 2861225 docker.go:196] disabling cri-docker service (if available) ...
	I0821 11:42:03.677384 2861225 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0821 11:42:03.691094 2861225 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0821 11:42:03.703979 2861225 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0821 11:42:03.717935 2861225 docker.go:206] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0821 11:42:03.718047 2861225 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0821 11:42:03.856213 2861225 docker.go:212] disabling docker service ...
	I0821 11:42:03.856353 2861225 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0821 11:42:03.872351 2861225 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0821 11:42:03.885659 2861225 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0821 11:42:04.019025 2861225 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0821 11:42:04.160068 2861225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0821 11:42:04.174746 2861225 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0821 11:42:04.193220 2861225 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0821 11:42:04.193316 2861225 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0821 11:42:04.208425 2861225 out.go:177] 
	W0821 11:42:04.210551 2861225 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0821 11:42:04.210571 2861225 out.go:239] * 
	* 
	W0821 11:42:04.211485 2861225 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0821 11:42:04.215153 2861225 out.go:177] 

** /stderr **
version_upgrade_test.go:212: upgrade from v1.17.0 to HEAD failed: out/minikube-linux-arm64 start -p stopped-upgrade-816837 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (93.01s)
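The log above pins the failure on the `pause_image` rewrite: the v1.17.0-era image has no `/etc/crio/crio.conf.d/02-crio.conf`, so `sed -i` exits with status 2 ("can't read ... No such file or directory"). A minimal sketch of the failing step made tolerant of a missing drop-in, using a temporary stand-in path rather than the real `/etc/crio` tree (this is an illustration, not minikube's actual fix):

```shell
# Stand-in for /etc/crio/crio.conf.d/02-crio.conf, which was absent on the old image.
conf="$(mktemp -d)/02-crio.conf"

# Ensure the drop-in directory and file exist before rewriting; this is what
# sed -i cannot do on its own and what caused the status-2 exit in the log.
mkdir -p "$(dirname "$conf")"
[ -f "$conf" ] || printf '%s\n' 'pause_image = "k8s.gcr.io/pause:3.2"' > "$conf"

# The same substitution the test ran, now against a file that is known to exist.
sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$conf"
cat "$conf"
```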


Test pass (271/310)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 19.52
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.08
10 TestDownloadOnly/v1.27.4/json-events 9.73
11 TestDownloadOnly/v1.27.4/preload-exists 0
15 TestDownloadOnly/v1.27.4/LogsDuration 0.07
17 TestDownloadOnly/v1.28.0-rc.1/json-events 10.2
18 TestDownloadOnly/v1.28.0-rc.1/preload-exists 0
22 TestDownloadOnly/v1.28.0-rc.1/LogsDuration 0.18
23 TestDownloadOnly/DeleteAll 0.39
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.22
26 TestBinaryMirror 0.59
29 TestAddons/Setup 170.35
31 TestAddons/parallel/Registry 16.61
33 TestAddons/parallel/InspektorGadget 10.78
34 TestAddons/parallel/MetricsServer 5.82
37 TestAddons/parallel/CSI 54.34
38 TestAddons/parallel/Headlamp 12.76
39 TestAddons/parallel/CloudSpanner 5.78
42 TestAddons/serial/GCPAuth/Namespaces 0.18
43 TestAddons/StoppedEnableDisable 12.34
44 TestCertOptions 39.4
45 TestCertExpiration 259.75
47 TestForceSystemdFlag 42.4
48 TestForceSystemdEnv 40.3
54 TestErrorSpam/setup 27.96
55 TestErrorSpam/start 0.82
56 TestErrorSpam/status 1.1
57 TestErrorSpam/pause 1.79
58 TestErrorSpam/unpause 1.95
59 TestErrorSpam/stop 1.45
62 TestFunctional/serial/CopySyncFile 0
63 TestFunctional/serial/StartWithProxy 77.87
64 TestFunctional/serial/AuditLog 0
65 TestFunctional/serial/SoftStart 30.96
66 TestFunctional/serial/KubeContext 0.07
67 TestFunctional/serial/KubectlGetPods 0.09
70 TestFunctional/serial/CacheCmd/cache/add_remote 4.02
71 TestFunctional/serial/CacheCmd/cache/add_local 1.12
72 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
73 TestFunctional/serial/CacheCmd/cache/list 0.06
74 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.34
75 TestFunctional/serial/CacheCmd/cache/cache_reload 2.18
76 TestFunctional/serial/CacheCmd/cache/delete 0.11
77 TestFunctional/serial/MinikubeKubectlCmd 0.13
78 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
79 TestFunctional/serial/ExtraConfig 37.25
80 TestFunctional/serial/ComponentHealth 0.11
81 TestFunctional/serial/LogsCmd 1.81
82 TestFunctional/serial/LogsFileCmd 1.86
83 TestFunctional/serial/InvalidService 4.38
85 TestFunctional/parallel/ConfigCmd 0.5
87 TestFunctional/parallel/DryRun 0.48
88 TestFunctional/parallel/InternationalLanguage 0.2
89 TestFunctional/parallel/StatusCmd 1.09
93 TestFunctional/parallel/ServiceCmdConnect 7.73
94 TestFunctional/parallel/AddonsCmd 0.23
95 TestFunctional/parallel/PersistentVolumeClaim 46.61
97 TestFunctional/parallel/SSHCmd 0.77
98 TestFunctional/parallel/CpCmd 1.38
100 TestFunctional/parallel/FileSync 0.37
101 TestFunctional/parallel/CertSync 2.26
105 TestFunctional/parallel/NodeLabels 0.09
107 TestFunctional/parallel/NonActiveRuntimeDisabled 0.71
109 TestFunctional/parallel/License 0.49
110 TestFunctional/parallel/Version/short 0.1
111 TestFunctional/parallel/Version/components 1.22
112 TestFunctional/parallel/ImageCommands/ImageListShort 0.33
113 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
114 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
115 TestFunctional/parallel/ImageCommands/ImageListYaml 0.39
116 TestFunctional/parallel/ImageCommands/ImageBuild 3.81
117 TestFunctional/parallel/ImageCommands/Setup 1.81
118 TestFunctional/parallel/UpdateContextCmd/no_changes 0.24
119 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.25
120 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.21
121 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.93
122 TestFunctional/parallel/ServiceCmd/DeployApp 11.32
123 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.01
124 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.96
125 TestFunctional/parallel/ServiceCmd/List 0.46
126 TestFunctional/parallel/ServiceCmd/JSONOutput 0.43
127 TestFunctional/parallel/ServiceCmd/HTTPS 0.46
128 TestFunctional/parallel/ServiceCmd/Format 0.52
129 TestFunctional/parallel/ServiceCmd/URL 0.59
131 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.66
132 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.29
133 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
135 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.8
136 TestFunctional/parallel/ImageCommands/ImageRemove 0.57
137 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.3
138 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.02
139 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.12
140 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
144 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
145 TestFunctional/parallel/ProfileCmd/profile_not_create 0.42
146 TestFunctional/parallel/ProfileCmd/profile_list 0.4
147 TestFunctional/parallel/ProfileCmd/profile_json_output 0.42
148 TestFunctional/parallel/MountCmd/any-port 8.93
149 TestFunctional/parallel/MountCmd/specific-port 2.27
150 TestFunctional/parallel/MountCmd/VerifyCleanup 1.94
151 TestFunctional/delete_addon-resizer_images 0.09
152 TestFunctional/delete_my-image_image 0.02
153 TestFunctional/delete_minikube_cached_images 0.02
157 TestIngressAddonLegacy/StartLegacyK8sCluster 90.4
159 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 11.5
160 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.83
164 TestJSONOutput/start/Command 75.78
165 TestJSONOutput/start/Audit 0
167 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
168 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
170 TestJSONOutput/pause/Command 0.81
171 TestJSONOutput/pause/Audit 0
173 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
174 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
176 TestJSONOutput/unpause/Command 0.76
177 TestJSONOutput/unpause/Audit 0
179 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
180 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
182 TestJSONOutput/stop/Command 5.97
183 TestJSONOutput/stop/Audit 0
185 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
187 TestErrorJSONOutput 0.28
189 TestKicCustomNetwork/create_custom_network 42.59
190 TestKicCustomNetwork/use_default_bridge_network 33.01
191 TestKicExistingNetwork 35.03
192 TestKicCustomSubnet 39.72
193 TestKicStaticIP 36.72
194 TestMainNoArgs 0.05
195 TestMinikubeProfile 73.58
198 TestMountStart/serial/StartWithMountFirst 7.11
199 TestMountStart/serial/VerifyMountFirst 0.27
200 TestMountStart/serial/StartWithMountSecond 6.9
201 TestMountStart/serial/VerifyMountSecond 0.28
202 TestMountStart/serial/DeleteFirst 1.71
203 TestMountStart/serial/VerifyMountPostDelete 0.29
204 TestMountStart/serial/Stop 1.26
205 TestMountStart/serial/RestartStopped 8.09
206 TestMountStart/serial/VerifyMountPostStop 0.27
209 TestMultiNode/serial/FreshStart2Nodes 125.42
210 TestMultiNode/serial/DeployApp2Nodes 5.87
212 TestMultiNode/serial/AddNode 47.66
213 TestMultiNode/serial/ProfileList 0.47
214 TestMultiNode/serial/CopyFile 10.68
215 TestMultiNode/serial/StopNode 2.3
216 TestMultiNode/serial/StartAfterStop 12.19
217 TestMultiNode/serial/RestartKeepsNodes 123.51
218 TestMultiNode/serial/DeleteNode 5.16
219 TestMultiNode/serial/StopMultiNode 24.02
220 TestMultiNode/serial/RestartMultiNode 86.07
221 TestMultiNode/serial/ValidateNameConflict 35.28
226 TestPreload 180.36
228 TestScheduledStopUnix 107.51
231 TestInsufficientStorage 10.91
234 TestKubernetesUpgrade 131.89
237 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
238 TestNoKubernetes/serial/StartWithK8s 44.24
239 TestNoKubernetes/serial/StartWithStopK8s 11.27
240 TestNoKubernetes/serial/Start 9.9
241 TestNoKubernetes/serial/VerifyK8sNotRunning 0.39
242 TestNoKubernetes/serial/ProfileList 1.12
243 TestNoKubernetes/serial/Stop 1.31
244 TestNoKubernetes/serial/StartNoArgs 8.16
245 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.45
246 TestStoppedBinaryUpgrade/Setup 0.95
248 TestStoppedBinaryUpgrade/MinikubeLogs 0.78
257 TestPause/serial/Start 55.83
258 TestPause/serial/SecondStartNoReconfiguration 44.35
266 TestNetworkPlugins/group/false 3.82
270 TestPause/serial/Pause 1.19
271 TestPause/serial/VerifyStatus 0.41
272 TestPause/serial/Unpause 0.96
273 TestPause/serial/PauseAgain 1.39
274 TestPause/serial/DeletePaused 3.01
275 TestPause/serial/VerifyDeletedResources 0.58
277 TestStartStop/group/old-k8s-version/serial/FirstStart 127.01
278 TestStartStop/group/old-k8s-version/serial/DeployApp 10.59
279 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.05
280 TestStartStop/group/old-k8s-version/serial/Stop 12.12
281 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
282 TestStartStop/group/old-k8s-version/serial/SecondStart 81.47
284 TestStartStop/group/no-preload/serial/FirstStart 71.33
285 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.03
286 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.12
287 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.47
288 TestStartStop/group/old-k8s-version/serial/Pause 4.26
290 TestStartStop/group/embed-certs/serial/FirstStart 81.44
291 TestStartStop/group/no-preload/serial/DeployApp 10.73
292 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.81
293 TestStartStop/group/no-preload/serial/Stop 12.43
294 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
295 TestStartStop/group/no-preload/serial/SecondStart 348.08
296 TestStartStop/group/embed-certs/serial/DeployApp 9.62
297 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.23
298 TestStartStop/group/embed-certs/serial/Stop 12.13
299 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
300 TestStartStop/group/embed-certs/serial/SecondStart 354.12
301 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 14.03
302 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.13
303 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.43
304 TestStartStop/group/no-preload/serial/Pause 4.71
306 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 83.2
307 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 16.03
308 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
309 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.35
310 TestStartStop/group/embed-certs/serial/Pause 3.43
312 TestStartStop/group/newest-cni/serial/FirstStart 47.09
313 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.62
314 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.12
315 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.43
316 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
317 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 353.96
318 TestStartStop/group/newest-cni/serial/DeployApp 0
319 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.08
320 TestStartStop/group/newest-cni/serial/Stop 1.26
321 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
322 TestStartStop/group/newest-cni/serial/SecondStart 30.73
323 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
324 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
325 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.39
326 TestStartStop/group/newest-cni/serial/Pause 3.09
327 TestNetworkPlugins/group/auto/Start 51.68
328 TestNetworkPlugins/group/auto/KubeletFlags 0.32
329 TestNetworkPlugins/group/auto/NetCatPod 10.4
330 TestNetworkPlugins/group/auto/DNS 0.22
331 TestNetworkPlugins/group/auto/Localhost 0.2
332 TestNetworkPlugins/group/auto/HairPin 0.19
333 TestNetworkPlugins/group/kindnet/Start 79.94
334 TestNetworkPlugins/group/kindnet/ControllerPod 5.04
335 TestNetworkPlugins/group/kindnet/KubeletFlags 0.3
336 TestNetworkPlugins/group/kindnet/NetCatPod 10.38
337 TestNetworkPlugins/group/kindnet/DNS 0.22
338 TestNetworkPlugins/group/kindnet/Localhost 0.18
339 TestNetworkPlugins/group/kindnet/HairPin 0.19
340 TestNetworkPlugins/group/calico/Start 68.9
341 TestNetworkPlugins/group/calico/ControllerPod 5.05
342 TestNetworkPlugins/group/calico/KubeletFlags 0.47
343 TestNetworkPlugins/group/calico/NetCatPod 13.68
344 TestNetworkPlugins/group/calico/DNS 0.27
345 TestNetworkPlugins/group/calico/Localhost 0.19
346 TestNetworkPlugins/group/calico/HairPin 0.22
347 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 12.08
348 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.13
349 TestNetworkPlugins/group/custom-flannel/Start 76.47
350 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.48
351 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.27
352 TestNetworkPlugins/group/enable-default-cni/Start 91.89
353 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.31
354 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.44
355 TestNetworkPlugins/group/custom-flannel/DNS 0.22
356 TestNetworkPlugins/group/custom-flannel/Localhost 0.19
357 TestNetworkPlugins/group/custom-flannel/HairPin 0.2
358 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.42
359 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.61
360 TestNetworkPlugins/group/flannel/Start 71.71
361 TestNetworkPlugins/group/enable-default-cni/DNS 0.27
362 TestNetworkPlugins/group/enable-default-cni/Localhost 0.25
363 TestNetworkPlugins/group/enable-default-cni/HairPin 0.33
364 TestNetworkPlugins/group/bridge/Start 52.7
365 TestNetworkPlugins/group/flannel/ControllerPod 5.05
366 TestNetworkPlugins/group/flannel/KubeletFlags 0.42
367 TestNetworkPlugins/group/flannel/NetCatPod 11.43
368 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
369 TestNetworkPlugins/group/bridge/NetCatPod 10.36
370 TestNetworkPlugins/group/flannel/DNS 0.22
371 TestNetworkPlugins/group/flannel/Localhost 0.16
372 TestNetworkPlugins/group/flannel/HairPin 0.19
373 TestNetworkPlugins/group/bridge/DNS 0.22
374 TestNetworkPlugins/group/bridge/Localhost 0.18
375 TestNetworkPlugins/group/bridge/HairPin 0.19
TestDownloadOnly/v1.16.0/json-events (19.52s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-658925 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-658925 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (19.522688198s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (19.52s)

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-658925
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-658925: exit status 85 (80.209983ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-658925 | jenkins | v1.31.2 | 21 Aug 23 11:01 UTC |          |
	|         | -p download-only-658925        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/21 11:01:55
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.20.7 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0821 11:01:55.654097 2739936 out.go:296] Setting OutFile to fd 1 ...
	I0821 11:01:55.654216 2739936 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 11:01:55.654226 2739936 out.go:309] Setting ErrFile to fd 2...
	I0821 11:01:55.654231 2739936 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 11:01:55.654480 2739936 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17102-2734539/.minikube/bin
	W0821 11:01:55.654612 2739936 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17102-2734539/.minikube/config/config.json: open /home/jenkins/minikube-integration/17102-2734539/.minikube/config/config.json: no such file or directory
	I0821 11:01:55.654983 2739936 out.go:303] Setting JSON to true
	I0821 11:01:55.655916 2739936 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":71059,"bootTime":1692544656,"procs":276,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0821 11:01:55.655976 2739936 start.go:138] virtualization:  
	I0821 11:01:55.659459 2739936 out.go:97] [download-only-658925] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0821 11:01:55.661896 2739936 out.go:169] MINIKUBE_LOCATION=17102
	W0821 11:01:55.659758 2739936 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/preloaded-tarball: no such file or directory
	I0821 11:01:55.659825 2739936 notify.go:220] Checking for updates...
	I0821 11:01:55.666041 2739936 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0821 11:01:55.668213 2739936 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17102-2734539/kubeconfig
	I0821 11:01:55.670309 2739936 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17102-2734539/.minikube
	I0821 11:01:55.672487 2739936 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0821 11:01:55.676692 2739936 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0821 11:01:55.676985 2739936 driver.go:373] Setting default libvirt URI to qemu:///system
	I0821 11:01:55.700508 2739936 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0821 11:01:55.700595 2739936 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0821 11:01:55.785574 2739936 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2023-08-21 11:01:55.776240481 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0821 11:01:55.785674 2739936 docker.go:294] overlay module found
	I0821 11:01:55.787977 2739936 out.go:97] Using the docker driver based on user configuration
	I0821 11:01:55.788003 2739936 start.go:298] selected driver: docker
	I0821 11:01:55.788010 2739936 start.go:902] validating driver "docker" against <nil>
	I0821 11:01:55.788118 2739936 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0821 11:01:55.857005 2739936 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2023-08-21 11:01:55.847751511 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0821 11:01:55.857181 2739936 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0821 11:01:55.857438 2739936 start_flags.go:382] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0821 11:01:55.857587 2739936 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0821 11:01:55.860083 2739936 out.go:169] Using Docker driver with root privileges
	I0821 11:01:55.862241 2739936 cni.go:84] Creating CNI manager for ""
	I0821 11:01:55.862257 2739936 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0821 11:01:55.862267 2739936 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0821 11:01:55.862277 2739936 start_flags.go:319] config:
	{Name:download-only-658925 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-658925 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 11:01:55.864606 2739936 out.go:97] Starting control plane node download-only-658925 in cluster download-only-658925
	I0821 11:01:55.864622 2739936 cache.go:122] Beginning downloading kic base image for docker with crio
	I0821 11:01:55.866802 2739936 out.go:97] Pulling base image ...
	I0821 11:01:55.866847 2739936 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0821 11:01:55.866929 2739936 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0821 11:01:55.882981 2739936 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 to local cache
	I0821 11:01:55.883771 2739936 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local cache directory
	I0821 11:01:55.883914 2739936 image.go:118] Writing gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 to local cache
	I0821 11:01:55.923885 2739936 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4
	I0821 11:01:55.923913 2739936 cache.go:57] Caching tarball of preloaded images
	I0821 11:01:55.924052 2739936 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0821 11:01:55.926712 2739936 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0821 11:01:55.926733 2739936 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4 ...
	I0821 11:01:56.050151 2739936 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:743cd3b7071469270e4dbdc0d89badaa -> /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4
	I0821 11:02:01.095229 2739936 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 as a tarball
	I0821 11:02:07.949576 2739936 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4 ...
	I0821 11:02:07.949703 2739936 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4 ...
	I0821 11:02:08.917462 2739936 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I0821 11:02:08.917810 2739936 profile.go:148] Saving config to /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/download-only-658925/config.json ...
	I0821 11:02:08.917842 2739936 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/download-only-658925/config.json: {Name:mk40bfbff534ec1c73159931518858806e65360a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0821 11:02:08.918407 2739936 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0821 11:02:08.919015 2739936 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/linux/arm64/kubectl.sha1 -> /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/linux/arm64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-658925"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.27.4/json-events (9.73s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.4/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-658925 --force --alsologtostderr --kubernetes-version=v1.27.4 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-658925 --force --alsologtostderr --kubernetes-version=v1.27.4 --container-runtime=crio --driver=docker  --container-runtime=crio: (9.731837968s)
--- PASS: TestDownloadOnly/v1.27.4/json-events (9.73s)

                                                
                                    
TestDownloadOnly/v1.27.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.4/preload-exists
--- PASS: TestDownloadOnly/v1.27.4/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.27.4/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.4/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-658925
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-658925: exit status 85 (73.846607ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-658925 | jenkins | v1.31.2 | 21 Aug 23 11:01 UTC |          |
	|         | -p download-only-658925        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-658925 | jenkins | v1.31.2 | 21 Aug 23 11:02 UTC |          |
	|         | -p download-only-658925        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.27.4   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/21 11:02:15
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.20.7 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0821 11:02:15.262006 2740020 out.go:296] Setting OutFile to fd 1 ...
	I0821 11:02:15.262326 2740020 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 11:02:15.262335 2740020 out.go:309] Setting ErrFile to fd 2...
	I0821 11:02:15.262340 2740020 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 11:02:15.262574 2740020 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17102-2734539/.minikube/bin
	W0821 11:02:15.262700 2740020 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17102-2734539/.minikube/config/config.json: open /home/jenkins/minikube-integration/17102-2734539/.minikube/config/config.json: no such file or directory
	I0821 11:02:15.262941 2740020 out.go:303] Setting JSON to true
	I0821 11:02:15.264059 2740020 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":71079,"bootTime":1692544656,"procs":254,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0821 11:02:15.264129 2740020 start.go:138] virtualization:  
	I0821 11:02:15.266983 2740020 out.go:97] [download-only-658925] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0821 11:02:15.269195 2740020 out.go:169] MINIKUBE_LOCATION=17102
	I0821 11:02:15.267311 2740020 notify.go:220] Checking for updates...
	I0821 11:02:15.274367 2740020 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0821 11:02:15.277227 2740020 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17102-2734539/kubeconfig
	I0821 11:02:15.279373 2740020 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17102-2734539/.minikube
	I0821 11:02:15.281604 2740020 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0821 11:02:15.286784 2740020 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0821 11:02:15.287335 2740020 config.go:182] Loaded profile config "download-only-658925": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W0821 11:02:15.287442 2740020 start.go:810] api.Load failed for download-only-658925: filestore "download-only-658925": Docker machine "download-only-658925" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0821 11:02:15.287567 2740020 driver.go:373] Setting default libvirt URI to qemu:///system
	W0821 11:02:15.287592 2740020 start.go:810] api.Load failed for download-only-658925: filestore "download-only-658925": Docker machine "download-only-658925" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0821 11:02:15.310541 2740020 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0821 11:02:15.310629 2740020 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0821 11:02:15.400533 2740020 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-08-21 11:02:15.391236851 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0821 11:02:15.400643 2740020 docker.go:294] overlay module found
	I0821 11:02:15.402963 2740020 out.go:97] Using the docker driver based on existing profile
	I0821 11:02:15.402987 2740020 start.go:298] selected driver: docker
	I0821 11:02:15.403000 2740020 start.go:902] validating driver "docker" against &{Name:download-only-658925 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-658925 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 11:02:15.403183 2740020 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0821 11:02:15.465565 2740020 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-08-21 11:02:15.456243781 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0821 11:02:15.466059 2740020 cni.go:84] Creating CNI manager for ""
	I0821 11:02:15.466076 2740020 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0821 11:02:15.466089 2740020 start_flags.go:319] config:
	{Name:download-only-658925 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:download-only-658925 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 11:02:15.468239 2740020 out.go:97] Starting control plane node download-only-658925 in cluster download-only-658925
	I0821 11:02:15.468258 2740020 cache.go:122] Beginning downloading kic base image for docker with crio
	I0821 11:02:15.470320 2740020 out.go:97] Pulling base image ...
	I0821 11:02:15.470345 2740020 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0821 11:02:15.470439 2740020 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0821 11:02:15.487137 2740020 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 to local cache
	I0821 11:02:15.487262 2740020 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local cache directory
	I0821 11:02:15.487285 2740020 image.go:66] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local cache directory, skipping pull
	I0821 11:02:15.487296 2740020 image.go:105] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in cache, skipping pull
	I0821 11:02:15.487304 2740020 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 as a tarball
	I0821 11:02:15.534319 2740020 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.4/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-arm64.tar.lz4
	I0821 11:02:15.534345 2740020 cache.go:57] Caching tarball of preloaded images
	I0821 11:02:15.534999 2740020 preload.go:132] Checking if preload exists for k8s version v1.27.4 and runtime crio
	I0821 11:02:15.537578 2740020 out.go:97] Downloading Kubernetes v1.27.4 preload ...
	I0821 11:02:15.537603 2740020 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-arm64.tar.lz4 ...
	I0821 11:02:15.658282 2740020 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.4/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-arm64.tar.lz4?checksum=md5:94c43c28edd6dc9f776b15426d1b273c -> /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.4-cri-o-overlay-arm64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-658925"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.27.4/LogsDuration (0.07s)

TestDownloadOnly/v1.28.0-rc.1/json-events (10.2s)

=== RUN   TestDownloadOnly/v1.28.0-rc.1/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-658925 --force --alsologtostderr --kubernetes-version=v1.28.0-rc.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-658925 --force --alsologtostderr --kubernetes-version=v1.28.0-rc.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (10.196151999s)
--- PASS: TestDownloadOnly/v1.28.0-rc.1/json-events (10.20s)

TestDownloadOnly/v1.28.0-rc.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0-rc.1/preload-exists
--- PASS: TestDownloadOnly/v1.28.0-rc.1/preload-exists (0.00s)

TestDownloadOnly/v1.28.0-rc.1/LogsDuration (0.18s)

=== RUN   TestDownloadOnly/v1.28.0-rc.1/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-658925
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-658925: exit status 85 (181.441824ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only           | download-only-658925 | jenkins | v1.31.2 | 21 Aug 23 11:01 UTC |          |
	|         | -p download-only-658925           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-658925 | jenkins | v1.31.2 | 21 Aug 23 11:02 UTC |          |
	|         | -p download-only-658925           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.27.4      |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-658925 | jenkins | v1.31.2 | 21 Aug 23 11:02 UTC |          |
	|         | -p download-only-658925           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.0-rc.1 |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=docker                   |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/08/21 11:02:25
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.20.7 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0821 11:02:25.066958 2740094 out.go:296] Setting OutFile to fd 1 ...
	I0821 11:02:25.067111 2740094 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 11:02:25.067122 2740094 out.go:309] Setting ErrFile to fd 2...
	I0821 11:02:25.067128 2740094 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 11:02:25.067371 2740094 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17102-2734539/.minikube/bin
	W0821 11:02:25.067488 2740094 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17102-2734539/.minikube/config/config.json: open /home/jenkins/minikube-integration/17102-2734539/.minikube/config/config.json: no such file or directory
	I0821 11:02:25.067705 2740094 out.go:303] Setting JSON to true
	I0821 11:02:25.068627 2740094 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":71089,"bootTime":1692544656,"procs":254,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0821 11:02:25.068687 2740094 start.go:138] virtualization:  
	I0821 11:02:25.071405 2740094 out.go:97] [download-only-658925] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0821 11:02:25.073365 2740094 out.go:169] MINIKUBE_LOCATION=17102
	I0821 11:02:25.071757 2740094 notify.go:220] Checking for updates...
	I0821 11:02:25.078051 2740094 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0821 11:02:25.080216 2740094 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17102-2734539/kubeconfig
	I0821 11:02:25.082020 2740094 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17102-2734539/.minikube
	I0821 11:02:25.083947 2740094 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0821 11:02:25.087462 2740094 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0821 11:02:25.088010 2740094 config.go:182] Loaded profile config "download-only-658925": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	W0821 11:02:25.088108 2740094 start.go:810] api.Load failed for download-only-658925: filestore "download-only-658925": Docker machine "download-only-658925" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0821 11:02:25.088261 2740094 driver.go:373] Setting default libvirt URI to qemu:///system
	W0821 11:02:25.088287 2740094 start.go:810] api.Load failed for download-only-658925: filestore "download-only-658925": Docker machine "download-only-658925" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0821 11:02:25.114284 2740094 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0821 11:02:25.114383 2740094 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0821 11:02:25.202898 2740094 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-08-21 11:02:25.192194653 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0821 11:02:25.203004 2740094 docker.go:294] overlay module found
	I0821 11:02:25.204952 2740094 out.go:97] Using the docker driver based on existing profile
	I0821 11:02:25.204981 2740094 start.go:298] selected driver: docker
	I0821 11:02:25.204988 2740094 start.go:902] validating driver "docker" against &{Name:download-only-658925 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:download-only-658925 Namespace:default APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticI
P: SSHAuthSock: SSHAgentPID:0}
	I0821 11:02:25.205168 2740094 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0821 11:02:25.271836 2740094 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-08-21 11:02:25.262474126 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0821 11:02:25.272324 2740094 cni.go:84] Creating CNI manager for ""
	I0821 11:02:25.272333 2740094 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0821 11:02:25.272344 2740094 start_flags.go:319] config:
	{Name:download-only-658925 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0-rc.1 ClusterName:download-only-658925 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocke
t: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 11:02:25.274343 2740094 out.go:97] Starting control plane node download-only-658925 in cluster download-only-658925
	I0821 11:02:25.274364 2740094 cache.go:122] Beginning downloading kic base image for docker with crio
	I0821 11:02:25.276267 2740094 out.go:97] Pulling base image ...
	I0821 11:02:25.276288 2740094 preload.go:132] Checking if preload exists for k8s version v1.28.0-rc.1 and runtime crio
	I0821 11:02:25.276432 2740094 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0821 11:02:25.292856 2740094 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 to local cache
	I0821 11:02:25.292986 2740094 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local cache directory
	I0821 11:02:25.293007 2740094 image.go:66] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local cache directory, skipping pull
	I0821 11:02:25.293012 2740094 image.go:105] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in cache, skipping pull
	I0821 11:02:25.293030 2740094 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 as a tarball
	I0821 11:02:25.349103 2740094 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0-rc.1/preloaded-images-k8s-v18-v1.28.0-rc.1-cri-o-overlay-arm64.tar.lz4
	I0821 11:02:25.349138 2740094 cache.go:57] Caching tarball of preloaded images
	I0821 11:02:25.349297 2740094 preload.go:132] Checking if preload exists for k8s version v1.28.0-rc.1 and runtime crio
	I0821 11:02:25.351313 2740094 out.go:97] Downloading Kubernetes v1.28.0-rc.1 preload ...
	I0821 11:02:25.351331 2740094 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.0-rc.1-cri-o-overlay-arm64.tar.lz4 ...
	I0821 11:02:25.469638 2740094 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0-rc.1/preloaded-images-k8s-v18-v1.28.0-rc.1-cri-o-overlay-arm64.tar.lz4?checksum=md5:96006785d2ea4f2ebb9d7bbb45276a95 -> /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-rc.1-cri-o-overlay-arm64.tar.lz4
	I0821 11:02:33.638243 2740094 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.0-rc.1-cri-o-overlay-arm64.tar.lz4 ...
	I0821 11:02:33.638346 2740094 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-rc.1-cri-o-overlay-arm64.tar.lz4 ...
	I0821 11:02:34.491427 2740094 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.0-rc.1 on crio
	I0821 11:02:34.491581 2740094 profile.go:148] Saving config to /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/download-only-658925/config.json ...
	I0821 11:02:34.491802 2740094 preload.go:132] Checking if preload exists for k8s version v1.28.0-rc.1 and runtime crio
	I0821 11:02:34.492464 2740094 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.0-rc.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0-rc.1/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/17102-2734539/.minikube/cache/linux/arm64/v1.28.0-rc.1/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-658925"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0-rc.1/LogsDuration (0.18s)

TestDownloadOnly/DeleteAll (0.39s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.39s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.22s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-658925
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.22s)

TestBinaryMirror (0.59s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-393918 --alsologtostderr --binary-mirror http://127.0.0.1:38977 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-393918" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-393918
--- PASS: TestBinaryMirror (0.59s)

TestAddons/Setup (170.35s)

=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-linux-arm64 start -p addons-664125 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:88: (dbg) Done: out/minikube-linux-arm64 start -p addons-664125 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns: (2m50.345549311s)
--- PASS: TestAddons/Setup (170.35s)

TestAddons/parallel/Registry (16.61s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:306: registry stabilized in 67.361545ms
addons_test.go:308: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-t9w8c" [2aaee73c-950c-479b-a2ea-af5439687b4f] Running
addons_test.go:308: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.019103197s
addons_test.go:311: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-ngqhd" [4b1b47d2-6796-4b8a-97ae-2699b8f2d4af] Running
addons_test.go:311: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.020765068s
addons_test.go:316: (dbg) Run:  kubectl --context addons-664125 delete po -l run=registry-test --now
addons_test.go:321: (dbg) Run:  kubectl --context addons-664125 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:321: (dbg) Done: kubectl --context addons-664125 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.435074991s)
addons_test.go:335: (dbg) Run:  out/minikube-linux-arm64 -p addons-664125 ip
2023/08/21 11:05:43 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p addons-664125 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.61s)

TestAddons/parallel/InspektorGadget (10.78s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-qp8zq" [02e9db41-89f5-4d0d-bb9d-c6e45f53fd7c] Running
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.015808342s
addons_test.go:817: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-664125
addons_test.go:817: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-664125: (5.765916164s)
--- PASS: TestAddons/parallel/InspektorGadget (10.78s)

TestAddons/parallel/MetricsServer (5.82s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:383: metrics-server stabilized in 9.332253ms
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7746886d4f-prk24" [9b054fcf-f0c3-405d-bdb4-e0cce366a51c] Running
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.012559952s
addons_test.go:391: (dbg) Run:  kubectl --context addons-664125 top pods -n kube-system
addons_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p addons-664125 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.82s)

TestAddons/parallel/CSI (54.34s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:537: csi-hostpath-driver pods stabilized in 9.349763ms
addons_test.go:540: (dbg) Run:  kubectl --context addons-664125 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:545: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-664125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-664125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-664125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-664125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-664125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-664125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-664125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-664125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-664125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-664125 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-664125 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:550: (dbg) Run:  kubectl --context addons-664125 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [f5d83836-2d39-4d1b-aa04-9f9f04a5e6b5] Pending
helpers_test.go:344: "task-pv-pod" [f5d83836-2d39-4d1b-aa04-9f9f04a5e6b5] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [f5d83836-2d39-4d1b-aa04-9f9f04a5e6b5] Running
addons_test.go:555: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.011715953s
addons_test.go:560: (dbg) Run:  kubectl --context addons-664125 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-664125 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-664125 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-664125 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:570: (dbg) Run:  kubectl --context addons-664125 delete pod task-pv-pod
addons_test.go:576: (dbg) Run:  kubectl --context addons-664125 delete pvc hpvc
addons_test.go:582: (dbg) Run:  kubectl --context addons-664125 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:587: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-664125 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-664125 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-664125 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-664125 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-664125 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-664125 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-664125 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-664125 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-664125 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-664125 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-664125 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-664125 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-664125 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-664125 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-664125 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:592: (dbg) Run:  kubectl --context addons-664125 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:597: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [10fdcc03-d79f-43a5-b96b-98d1253c160e] Pending
helpers_test.go:344: "task-pv-pod-restore" [10fdcc03-d79f-43a5-b96b-98d1253c160e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [10fdcc03-d79f-43a5-b96b-98d1253c160e] Running
addons_test.go:597: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.024530456s
addons_test.go:602: (dbg) Run:  kubectl --context addons-664125 delete pod task-pv-pod-restore
addons_test.go:602: (dbg) Done: kubectl --context addons-664125 delete pod task-pv-pod-restore: (1.041208236s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-664125 delete pvc hpvc-restore
addons_test.go:610: (dbg) Run:  kubectl --context addons-664125 delete volumesnapshot new-snapshot-demo
addons_test.go:614: (dbg) Run:  out/minikube-linux-arm64 -p addons-664125 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:614: (dbg) Done: out/minikube-linux-arm64 -p addons-664125 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.822220331s)
addons_test.go:618: (dbg) Run:  out/minikube-linux-arm64 -p addons-664125 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (54.34s)

TestAddons/parallel/Headlamp (12.76s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:800: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-664125 --alsologtostderr -v=1
addons_test.go:800: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-664125 --alsologtostderr -v=1: (1.722944254s)
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5c78f74d8d-wj9pg" [83577996-2d5a-4349-855d-46c45972833b] Pending
helpers_test.go:344: "headlamp-5c78f74d8d-wj9pg" [83577996-2d5a-4349-855d-46c45972833b] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5c78f74d8d-wj9pg" [83577996-2d5a-4349-855d-46c45972833b] Running
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.038278436s
--- PASS: TestAddons/parallel/Headlamp (12.76s)

TestAddons/parallel/CloudSpanner (5.78s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-d67854dc9-qlm47" [744b875b-db80-477c-bbcd-34ec846fef89] Running
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.016808703s
addons_test.go:836: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-664125
--- PASS: TestAddons/parallel/CloudSpanner (5.78s)

TestAddons/serial/GCPAuth/Namespaces (0.18s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:626: (dbg) Run:  kubectl --context addons-664125 create ns new-namespace
addons_test.go:640: (dbg) Run:  kubectl --context addons-664125 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

TestAddons/StoppedEnableDisable (12.34s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:148: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-664125
addons_test.go:148: (dbg) Done: out/minikube-linux-arm64 stop -p addons-664125: (12.069527538s)
addons_test.go:152: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-664125
addons_test.go:156: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-664125
addons_test.go:161: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-664125
--- PASS: TestAddons/StoppedEnableDisable (12.34s)

TestCertOptions (39.4s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-802660 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-802660 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (36.638454095s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-802660 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-802660 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-802660 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-802660" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-802660
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-802660: (2.08124962s)
--- PASS: TestCertOptions (39.40s)

TestCertExpiration (259.75s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-700917 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-700917 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (41.496295703s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-700917 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
E0821 11:47:39.857347 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/functional-723696/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-700917 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (35.542811884s)
helpers_test.go:175: Cleaning up "cert-expiration-700917" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-700917
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-700917: (2.712556785s)
--- PASS: TestCertExpiration (259.75s)

TestForceSystemdFlag (42.4s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-060923 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-060923 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (39.368938006s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-060923 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-060923" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-060923
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-060923: (2.604416792s)
--- PASS: TestForceSystemdFlag (42.40s)

TestForceSystemdEnv (40.3s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-144549 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-144549 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (37.73868249s)
helpers_test.go:175: Cleaning up "force-systemd-env-144549" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-144549
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-144549: (2.564733551s)
--- PASS: TestForceSystemdEnv (40.30s)

TestErrorSpam/setup (27.96s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-294502 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-294502 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-294502 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-294502 --driver=docker  --container-runtime=crio: (27.959499904s)
--- PASS: TestErrorSpam/setup (27.96s)

TestErrorSpam/start (0.82s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-294502 --log_dir /tmp/nospam-294502 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-294502 --log_dir /tmp/nospam-294502 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-294502 --log_dir /tmp/nospam-294502 start --dry-run
--- PASS: TestErrorSpam/start (0.82s)

TestErrorSpam/status (1.1s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-294502 --log_dir /tmp/nospam-294502 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-294502 --log_dir /tmp/nospam-294502 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-294502 --log_dir /tmp/nospam-294502 status
--- PASS: TestErrorSpam/status (1.10s)

TestErrorSpam/pause (1.79s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-294502 --log_dir /tmp/nospam-294502 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-294502 --log_dir /tmp/nospam-294502 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-294502 --log_dir /tmp/nospam-294502 pause
--- PASS: TestErrorSpam/pause (1.79s)

TestErrorSpam/unpause (1.95s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-294502 --log_dir /tmp/nospam-294502 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-294502 --log_dir /tmp/nospam-294502 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-294502 --log_dir /tmp/nospam-294502 unpause
--- PASS: TestErrorSpam/unpause (1.95s)

TestErrorSpam/stop (1.45s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-294502 --log_dir /tmp/nospam-294502 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-294502 --log_dir /tmp/nospam-294502 stop: (1.25310592s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-294502 --log_dir /tmp/nospam-294502 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-294502 --log_dir /tmp/nospam-294502 stop
--- PASS: TestErrorSpam/stop (1.45s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17102-2734539/.minikube/files/etc/test/nested/copy/2739930/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (77.87s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-723696 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E0821 11:10:27.832446 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/addons-664125/client.crt: no such file or directory
E0821 11:10:27.841254 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/addons-664125/client.crt: no such file or directory
E0821 11:10:27.851474 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/addons-664125/client.crt: no such file or directory
E0821 11:10:27.871730 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/addons-664125/client.crt: no such file or directory
E0821 11:10:27.911974 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/addons-664125/client.crt: no such file or directory
E0821 11:10:27.992264 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/addons-664125/client.crt: no such file or directory
E0821 11:10:28.152617 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/addons-664125/client.crt: no such file or directory
E0821 11:10:28.473285 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/addons-664125/client.crt: no such file or directory
E0821 11:10:29.113726 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/addons-664125/client.crt: no such file or directory
E0821 11:10:30.394842 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/addons-664125/client.crt: no such file or directory
E0821 11:10:32.956632 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/addons-664125/client.crt: no such file or directory
E0821 11:10:38.077746 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/addons-664125/client.crt: no such file or directory
E0821 11:10:48.317976 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/addons-664125/client.crt: no such file or directory
E0821 11:11:08.798653 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/addons-664125/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-723696 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m17.870341811s)
--- PASS: TestFunctional/serial/StartWithProxy (77.87s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (30.96s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-723696 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-723696 --alsologtostderr -v=8: (30.957363185s)
functional_test.go:659: soft start took 30.963667544s for "functional-723696" cluster.
--- PASS: TestFunctional/serial/SoftStart (30.96s)

TestFunctional/serial/KubeContext (0.07s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-723696 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.02s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-723696 cache add registry.k8s.io/pause:3.1: (1.344408789s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-723696 cache add registry.k8s.io/pause:3.3: (1.338227391s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-723696 cache add registry.k8s.io/pause:latest: (1.339867514s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.02s)

TestFunctional/serial/CacheCmd/cache/add_local (1.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-723696 /tmp/TestFunctionalserialCacheCmdcacheadd_local1927431547/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 cache add minikube-local-cache-test:functional-723696
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 cache delete minikube-local-cache-test:functional-723696
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-723696
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.12s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.34s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.34s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.18s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-723696 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (319.169658ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-arm64 -p functional-723696 cache reload: (1.201581141s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.18s)
TestFunctional/serial/CacheCmd/cache/delete (0.11s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)
TestFunctional/serial/MinikubeKubectlCmd (0.13s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 kubectl -- --context functional-723696 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-723696 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)
TestFunctional/serial/ExtraConfig (37.25s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-723696 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0821 11:11:49.759601 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/addons-664125/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-723696 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.248237046s)
functional_test.go:757: restart took 37.248327473s for "functional-723696" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (37.25s)
TestFunctional/serial/ComponentHealth (0.11s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-723696 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)
TestFunctional/serial/LogsCmd (1.81s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-723696 logs: (1.81310206s)
--- PASS: TestFunctional/serial/LogsCmd (1.81s)
TestFunctional/serial/LogsFileCmd (1.86s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 logs --file /tmp/TestFunctionalserialLogsFileCmd1067422203/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-723696 logs --file /tmp/TestFunctionalserialLogsFileCmd1067422203/001/logs.txt: (1.864029245s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.86s)
TestFunctional/serial/InvalidService (4.38s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-723696 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-723696
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-723696: exit status 115 (610.248744ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31115 |
	|-----------|-------------|-------------|---------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-723696 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.38s)
TestFunctional/parallel/ConfigCmd (0.5s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-723696 config get cpus: exit status 14 (78.863195ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-723696 config get cpus: exit status 14 (70.641778ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.50s)
TestFunctional/parallel/DryRun (0.48s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-723696 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-723696 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (215.491091ms)
-- stdout --
	* [functional-723696] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17102
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17102-2734539/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17102-2734539/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0821 11:13:28.314218 2766562 out.go:296] Setting OutFile to fd 1 ...
	I0821 11:13:28.314414 2766562 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 11:13:28.314424 2766562 out.go:309] Setting ErrFile to fd 2...
	I0821 11:13:28.314430 2766562 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 11:13:28.314754 2766562 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17102-2734539/.minikube/bin
	I0821 11:13:28.315207 2766562 out.go:303] Setting JSON to false
	I0821 11:13:28.316478 2766562 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":71752,"bootTime":1692544656,"procs":374,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0821 11:13:28.316557 2766562 start.go:138] virtualization:  
	I0821 11:13:28.319837 2766562 out.go:177] * [functional-723696] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0821 11:13:28.322811 2766562 out.go:177]   - MINIKUBE_LOCATION=17102
	I0821 11:13:28.324835 2766562 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0821 11:13:28.322996 2766562 notify.go:220] Checking for updates...
	I0821 11:13:28.328831 2766562 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17102-2734539/kubeconfig
	I0821 11:13:28.331110 2766562 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17102-2734539/.minikube
	I0821 11:13:28.333099 2766562 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0821 11:13:28.335511 2766562 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0821 11:13:28.337930 2766562 config.go:182] Loaded profile config "functional-723696": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0821 11:13:28.338533 2766562 driver.go:373] Setting default libvirt URI to qemu:///system
	I0821 11:13:28.368605 2766562 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0821 11:13:28.368704 2766562 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0821 11:13:28.451047 2766562 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:46 SystemTime:2023-08-21 11:13:28.440753466 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0821 11:13:28.451154 2766562 docker.go:294] overlay module found
	I0821 11:13:28.454599 2766562 out.go:177] * Using the docker driver based on existing profile
	I0821 11:13:28.456451 2766562 start.go:298] selected driver: docker
	I0821 11:13:28.456473 2766562 start.go:902] validating driver "docker" against &{Name:functional-723696 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:functional-723696 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 11:13:28.456593 2766562 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0821 11:13:28.459183 2766562 out.go:177] 
	W0821 11:13:28.461139 2766562 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0821 11:13:28.463084 2766562 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-723696 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.48s)
TestFunctional/parallel/InternationalLanguage (0.2s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-723696 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-723696 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (202.291813ms)
-- stdout --
	* [functional-723696] minikube v1.31.2 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17102
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17102-2734539/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17102-2734539/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0821 11:13:28.790281 2766669 out.go:296] Setting OutFile to fd 1 ...
	I0821 11:13:28.790462 2766669 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 11:13:28.790491 2766669 out.go:309] Setting ErrFile to fd 2...
	I0821 11:13:28.790515 2766669 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 11:13:28.792143 2766669 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17102-2734539/.minikube/bin
	I0821 11:13:28.792572 2766669 out.go:303] Setting JSON to false
	I0821 11:13:28.793687 2766669 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":71753,"bootTime":1692544656,"procs":374,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0821 11:13:28.793800 2766669 start.go:138] virtualization:  
	I0821 11:13:28.796207 2766669 out.go:177] * [functional-723696] minikube v1.31.2 sur Ubuntu 20.04 (arm64)
	I0821 11:13:28.798659 2766669 out.go:177]   - MINIKUBE_LOCATION=17102
	I0821 11:13:28.800441 2766669 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0821 11:13:28.798873 2766669 notify.go:220] Checking for updates...
	I0821 11:13:28.802517 2766669 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17102-2734539/kubeconfig
	I0821 11:13:28.804468 2766669 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17102-2734539/.minikube
	I0821 11:13:28.806433 2766669 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0821 11:13:28.808496 2766669 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0821 11:13:28.811763 2766669 config.go:182] Loaded profile config "functional-723696": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0821 11:13:28.812533 2766669 driver.go:373] Setting default libvirt URI to qemu:///system
	I0821 11:13:28.836065 2766669 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0821 11:13:28.836176 2766669 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0821 11:13:28.921851 2766669 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:46 SystemTime:2023-08-21 11:13:28.912493943 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0821 11:13:28.921964 2766669 docker.go:294] overlay module found
	I0821 11:13:28.924320 2766669 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0821 11:13:28.926337 2766669 start.go:298] selected driver: docker
	I0821 11:13:28.926357 2766669 start.go:902] validating driver "docker" against &{Name:functional-723696 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.4 ClusterName:functional-723696 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.27.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0821 11:13:28.926475 2766669 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0821 11:13:28.929065 2766669 out.go:177] 
	W0821 11:13:28.931037 2766669 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0821 11:13:28.933353 2766669 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.20s)
TestFunctional/parallel/StatusCmd (1.09s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.09s)
TestFunctional/parallel/ServiceCmdConnect (7.73s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1626: (dbg) Run:  kubectl --context functional-723696 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-723696 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58d66798bb-hhhp9" [7942f5ae-fd4b-4c30-aef5-9a97882b5bda] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-58d66798bb-hhhp9" [7942f5ae-fd4b-4c30-aef5-9a97882b5bda] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.026716106s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:32265
functional_test.go:1674: http://192.168.49.2:32265: success! body:

Hostname: hello-node-connect-58d66798bb-hhhp9

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32265
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.73s)
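The echoserver body logged above is plain `key=value` lines grouped under section headers, so it is easy to post-process. A minimal sketch (hypothetical `field` helper, body embedded verbatim from the log above, trimmed to the interesting lines) of pulling individual fields out of such a response:

```shell
# Trimmed copy of the echoserver response body from the log above.
body='Hostname: hello-node-connect-58d66798bb-hhhp9

Request Information:
  client_address=10.244.0.1
  method=GET
  request_uri=http://192.168.49.2:8080/

Request Headers:
  host=192.168.49.2:32265
  user-agent=Go-http-client/1.1'

# field KEY -- print the value of "KEY=" from the body, empty if absent.
field() {
  printf '%s\n' "$body" | sed -n "s/^[[:space:]]*$1=//p"
}

field host             # -> 192.168.49.2:32265
field client_address   # -> 10.244.0.1
```

Note that `host` reflects the NodePort URL the test curled (port 32265), while `request_uri` shows the container-side port 8080 the deployment was exposed on.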

TestFunctional/parallel/AddonsCmd (0.23s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.23s)

TestFunctional/parallel/PersistentVolumeClaim (46.61s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [6d9eebe5-8131-45eb-aed6-af1ee37b6d83] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.012502458s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-723696 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-723696 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-723696 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-723696 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [18e70554-21f6-4b11-87d5-da4704efecaa] Pending
helpers_test.go:344: "sp-pod" [18e70554-21f6-4b11-87d5-da4704efecaa] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0821 11:13:11.680325 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/addons-664125/client.crt: no such file or directory
helpers_test.go:344: "sp-pod" [18e70554-21f6-4b11-87d5-da4704efecaa] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 27.023551362s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-723696 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-723696 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-723696 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [3cee85d7-3bad-4ba9-b4f7-8c6199a4dadc] Pending
helpers_test.go:344: "sp-pod" [3cee85d7-3bad-4ba9-b4f7-8c6199a4dadc] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [3cee85d7-3bad-4ba9-b4f7-8c6199a4dadc] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.023457502s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-723696 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (46.61s)

TestFunctional/parallel/SSHCmd (0.77s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.77s)

TestFunctional/parallel/CpCmd (1.38s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 ssh -n functional-723696 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 cp functional-723696:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd148127642/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 ssh -n functional-723696 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.38s)

TestFunctional/parallel/FileSync (0.37s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/2739930/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 ssh "sudo cat /etc/test/nested/copy/2739930/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.37s)

TestFunctional/parallel/CertSync (2.26s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/2739930.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 ssh "sudo cat /etc/ssl/certs/2739930.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/2739930.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 ssh "sudo cat /usr/share/ca-certificates/2739930.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/27399302.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 ssh "sudo cat /etc/ssl/certs/27399302.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/27399302.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 ssh "sudo cat /usr/share/ca-certificates/27399302.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.26s)
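CertSync checks each synced certificate under two names: its plain filename (`2739930.pem`) and an OpenSSL subject-hash name of the form `<8 hex digits>.0` (`51391683.0`, `3ec20f2e.0`), which is how `/etc/ssl/certs` indexes trust anchors. A sketch, assuming the `openssl` CLI is available, of computing that hash-style filename for a throwaway self-signed certificate (the `/tmp/demo.*` paths and CN are illustrative):

```shell
# Generate a throwaway self-signed cert (no passphrase, 1-day validity).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=certsync-demo" \
  -keyout /tmp/demo.key -out /tmp/demo.pem 2>/dev/null

# Subject hash -- the basename /etc/ssl/certs would link this cert under.
hash=$(openssl x509 -noout -hash -in /tmp/demo.pem)
echo "${hash}.0"    # an 8-hex-digit name such as 3ec20f2e.0
```

The trailing `.0` is a collision counter: a second certificate with the same subject hash would be linked as `<hash>.1`, and so on.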

TestFunctional/parallel/NodeLabels (0.09s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-723696 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.71s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-723696 ssh "sudo systemctl is-active docker": exit status 1 (406.016762ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-723696 ssh "sudo systemctl is-active containerd": exit status 1 (299.343043ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.71s)
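This test passes *because* the commands fail: for a unit that is not active, `systemctl is-active` prints the state on stdout and exits with status 3 (the LSB "program is not running" code), which ssh relays as the "Process exited with status 3" seen in stderr above. A minimal stand-in sketch (the `check` helper is hypothetical) of capturing both the text and the exit status the way the test harness does:

```shell
# Stand-in for: minikube ssh "sudo systemctl is-active $1"
# on a host where the unit is inactive.
check() {
  echo "inactive"
  return 3
}

# Capture stdout and exit status separately (set -e safe).
if out=$(check docker); then rc=0; else rc=$?; fi
echo "output=$out exit=$rc"    # -> output=inactive exit=3
```

Asserting on the exit status rather than the text is what lets the test distinguish "inactive" (status 3) from, say, an ssh transport failure (status 255).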

TestFunctional/parallel/License (0.49s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.49s)

TestFunctional/parallel/Version/short (0.1s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 version --short
--- PASS: TestFunctional/parallel/Version/short (0.10s)

TestFunctional/parallel/Version/components (1.22s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-arm64 -p functional-723696 version -o=json --components: (1.217991361s)
--- PASS: TestFunctional/parallel/Version/components (1.22s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-723696 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.27.4
registry.k8s.io/kube-proxy:v1.27.4
registry.k8s.io/kube-controller-manager:v1.27.4
registry.k8s.io/kube-apiserver:v1.27.4
registry.k8s.io/etcd:3.5.7-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-723696
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20230511-dc714da8
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-723696 image ls --format short --alsologtostderr:
I0821 11:13:36.831044 2767635 out.go:296] Setting OutFile to fd 1 ...
I0821 11:13:36.831326 2767635 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0821 11:13:36.831355 2767635 out.go:309] Setting ErrFile to fd 2...
I0821 11:13:36.831374 2767635 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0821 11:13:36.831650 2767635 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17102-2734539/.minikube/bin
I0821 11:13:36.832439 2767635 config.go:182] Loaded profile config "functional-723696": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
I0821 11:13:36.832719 2767635 config.go:182] Loaded profile config "functional-723696": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
I0821 11:13:36.833232 2767635 cli_runner.go:164] Run: docker container inspect functional-723696 --format={{.State.Status}}
I0821 11:13:36.865146 2767635 ssh_runner.go:195] Run: systemctl --version
I0821 11:13:36.865193 2767635 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-723696
I0821 11:13:36.895618 2767635 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36198 SSHKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/machines/functional-723696/id_rsa Username:docker}
I0821 11:13:37.023487 2767635 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.33s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-723696 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| gcr.io/k8s-minikube/busybox             | latest             | 71a676dd070f4 | 1.63MB |
| registry.k8s.io/kube-controller-manager | v1.27.4            | 389f6f052cf83 | 109MB  |
| registry.k8s.io/kube-proxy              | v1.27.4            | 532e5a30e948f | 68.1MB |
| registry.k8s.io/kube-scheduler          | v1.27.4            | 6eb63895cb67f | 57.6MB |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
| docker.io/kindest/kindnetd              | v20230511-dc714da8 | b18bf71b941ba | 60.9MB |
| registry.k8s.io/kube-apiserver          | v1.27.4            | 64aece92d6bde | 116MB  |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
| docker.io/library/nginx                 | alpine             | 397432849901d | 45.3MB |
| docker.io/library/nginx                 | latest             | ab73c7fd67234 | 196MB  |
| gcr.io/google-containers/addon-resizer  | functional-723696  | ffd4cfbbe753e | 34.1MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| localhost/my-image                      | functional-723696  | 9e2b36c563c39 | 1.64MB |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| registry.k8s.io/etcd                    | 3.5.7-0            | 24bc64e911039 | 182MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| registry.k8s.io/coredns/coredns         | v1.10.1            | 97e04611ad434 | 51.4MB |
| registry.k8s.io/pause                   | 3.9                | 829e9de338bd5 | 520kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-723696 image ls --format table --alsologtostderr:
I0821 11:13:41.597488 2768071 out.go:296] Setting OutFile to fd 1 ...
I0821 11:13:41.597728 2768071 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0821 11:13:41.597753 2768071 out.go:309] Setting ErrFile to fd 2...
I0821 11:13:41.597772 2768071 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0821 11:13:41.598055 2768071 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17102-2734539/.minikube/bin
I0821 11:13:41.598882 2768071 config.go:182] Loaded profile config "functional-723696": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
I0821 11:13:41.599080 2768071 config.go:182] Loaded profile config "functional-723696": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
I0821 11:13:41.599623 2768071 cli_runner.go:164] Run: docker container inspect functional-723696 --format={{.State.Status}}
I0821 11:13:41.619776 2768071 ssh_runner.go:195] Run: systemctl --version
I0821 11:13:41.619835 2768071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-723696
I0821 11:13:41.637524 2768071 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36198 SSHKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/machines/functional-723696/id_rsa Username:docker}
I0821 11:13:41.727391 2768071 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-723696 image ls --format json --alsologtostderr:
[{"id":"6eb63895cb67fce76da3ed6eaaa865ff55e7c761c9e6a691a83855ff0987a085","repoDigests":["registry.k8s.io/kube-scheduler@sha256:516cd341872a8d3c967df9a69eeff664651efbb9df438f8dce6bf3ab430f26f8","registry.k8s.io/kube-scheduler@sha256:5897d7a97d23dce25cbf36fcd6e919180a8ef904bf5156583ffdb6a733ab04af"],"repoTags":["registry.k8s.io/kube-scheduler:v1.27.4"],"size":"57615158"},{"id":"92f5e84b844a35d2d952c68cbdbeb8b38beece818bad1efee34ce0d95c8a74bc","repoDigests":["docker.io/library/c8f58f906e20ba1ec1cb1c8bf0eee6c69c37c0fff96b184d94470ff72b5f6ef5-tmp@sha256:b0d5c2667cf551cb573296a2d80377efeb9e2fbf0497b7e443a788c94c62d652"],"repoTags":[],"size":"1637644"},{"id":"397432849901d4b78b8fda5db7d50e074ac273977a4a78ce47ad069d4a15e091","repoDigests":["docker.io/library/nginx@sha256:cac882be2b7305e0c8d3e3cd0575a2fd58f5fde6dd5d6299605aa0f3e67ca385","docker.io/library/nginx@sha256:dd310aff240d2900d6a16614060392a741d4db0823cfe3e94ef80105b7e5983c"],"repoTags":["docker.io/library/nginx:alpine"],"size":"45265715"},{"id":"ffd4
cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-723696"],"size":"34114467"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":["registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105","registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1
.10.1"],"size":"51393451"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"},{"id":"64aece92d6bde5b472d8185fcd2d5ab1add8814923a26561821f7cab5e819388","repoDigests":["registry.k8s.io/kube-apiserver@sha256:697cd88d94f7f2ef42144cb3072b016dcb2e9251f0e7d41a7fede557e555452d","registry.k8s.io/kube-apiserver@sha256:f65711310c4a5a305faecd8630aeee145cda14bee3a018967c02a1495170e815"],"repoTags":["registry.k8s.io/kube-apiserver:v1.27.4"],"size":"116270032"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/pause@sha256:
3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"520014"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1634527"},{"id":"9e2b36c563c392261e350f0a512d9a87472913daa94c1d7060d981dabeba1ac1","r
epoDigests":["localhost/my-image@sha256:18e3bd6e0f5956d3652fae674f33a0463560884920deb3d643094a5ebdc7c587"],"repoTags":["localhost/my-image:functional-723696"],"size":"1640225"},{"id":"389f6f052cf83156f82a2bbbf6ea2c24292d246b58900d91f6a1707eacf510b2","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:6286e500782ad6d0b37a1b8be57fc73f597dc931dfc73ff18ce534059803b265","registry.k8s.io/kube-controller-manager@sha256:955b498eda0646d58e6d15e1156da8ac731dedf1a9a47b5fbccce0d5e29fd3fd"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.27.4"],"size":"108667702"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79","repoDigests":["docker.io/kindest/kindnetd@sha256:2c39858b71cf6c5737ff0daa8130a6574d4c6bd2a7dacaf002060c02f2bc1b4f","docker.io/k
indest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974"],"repoTags":["docker.io/kindest/kindnetd:v20230511-dc714da8"],"size":"60881430"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"ab73c7fd672341e41ec600081253d0b99ea31d0c1acdfb46a1485004472da7ac","repoDigests":["docker.io/library/nginx@sha256:104c7c5c54f2685f0f46f3be607ce60da7085da3eaa5ad22d
3d9f01594295e9c","docker.io/library/nginx@sha256:d204087971390839f077afcaa4f5a771c1694610f0f7cb13a2d2a3aa520b053f"],"repoTags":["docker.io/library/nginx:latest"],"size":"196196622"},{"id":"24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737","repoDigests":["registry.k8s.io/etcd@sha256:1c19137e8a1716ce9f66c8c767bf114d7cad975db7a9784146486aa764f6dddd","registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83"],"repoTags":["registry.k8s.io/etcd:3.5.7-0"],"size":"182283991"},{"id":"532e5a30e948f1c084333316b13e68fbeff8df667f3830b082005127a6d86317","repoDigests":["registry.k8s.io/kube-proxy@sha256:4bcb707da9898d2625f5d4edc6d0c96519a24f16db914fc673aa8f97e41dbabf","registry.k8s.io/kube-proxy@sha256:f22b84e066d9bb46451754c220ae6f85bfaf4b661636af4bcc22c221f9b8ccca"],"repoTags":["registry.k8s.io/kube-proxy:v1.27.4"],"size":"68099991"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff80
17007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-723696 image ls --format json --alsologtostderr:
I0821 11:13:41.347287 2768045 out.go:296] Setting OutFile to fd 1 ...
I0821 11:13:41.347426 2768045 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0821 11:13:41.347434 2768045 out.go:309] Setting ErrFile to fd 2...
I0821 11:13:41.347439 2768045 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0821 11:13:41.347709 2768045 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17102-2734539/.minikube/bin
I0821 11:13:41.348313 2768045 config.go:182] Loaded profile config "functional-723696": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
I0821 11:13:41.348433 2768045 config.go:182] Loaded profile config "functional-723696": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
I0821 11:13:41.348876 2768045 cli_runner.go:164] Run: docker container inspect functional-723696 --format={{.State.Status}}
I0821 11:13:41.374949 2768045 ssh_runner.go:195] Run: systemctl --version
I0821 11:13:41.375006 2768045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-723696
I0821 11:13:41.395544 2768045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36198 SSHKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/machines/functional-723696/id_rsa Username:docker}
I0821 11:13:41.491651 2768045 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)
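As the stdout above shows, `image ls --format json` emits a flat array of records with `id`, `repoDigests`, `repoTags`, and a `size` field encoded as a decimal string of bytes. A minimal sketch (not part of the test suite) of consuming that shape, using two records copied from the listing above:

```python
import json

# Two records copied verbatim (trimmed to the relevant fields) from the
# `image ls --format json` output above.
raw = """[
  {"id": "8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5",
   "repoTags": ["registry.k8s.io/pause:3.1"], "size": "528622"},
  {"id": "24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737",
   "repoTags": ["registry.k8s.io/etcd:3.5.7-0"], "size": "182283991"}
]"""

images = json.loads(raw)
# `size` is a decimal string, so convert before summing.
total_bytes = sum(int(img["size"]) for img in images)
tags = [tag for img in images for tag in img["repoTags"]]
print(total_bytes, tags)  # → 182812613 ['registry.k8s.io/pause:3.1', 'registry.k8s.io/etcd:3.5.7-0']
```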
TestFunctional/parallel/ImageCommands/ImageListYaml (0.39s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-723696 image ls --format yaml --alsologtostderr:
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 397432849901d4b78b8fda5db7d50e074ac273977a4a78ce47ad069d4a15e091
repoDigests:
- docker.io/library/nginx@sha256:cac882be2b7305e0c8d3e3cd0575a2fd58f5fde6dd5d6299605aa0f3e67ca385
- docker.io/library/nginx@sha256:dd310aff240d2900d6a16614060392a741d4db0823cfe3e94ef80105b7e5983c
repoTags:
- docker.io/library/nginx:alpine
size: "45265715"
- id: 24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737
repoDigests:
- registry.k8s.io/etcd@sha256:1c19137e8a1716ce9f66c8c767bf114d7cad975db7a9784146486aa764f6dddd
- registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83
repoTags:
- registry.k8s.io/etcd:3.5.7-0
size: "182283991"
- id: 64aece92d6bde5b472d8185fcd2d5ab1add8814923a26561821f7cab5e819388
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:697cd88d94f7f2ef42144cb3072b016dcb2e9251f0e7d41a7fede557e555452d
- registry.k8s.io/kube-apiserver@sha256:f65711310c4a5a305faecd8630aeee145cda14bee3a018967c02a1495170e815
repoTags:
- registry.k8s.io/kube-apiserver:v1.27.4
size: "116270032"
- id: 389f6f052cf83156f82a2bbbf6ea2c24292d246b58900d91f6a1707eacf510b2
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:6286e500782ad6d0b37a1b8be57fc73f597dc931dfc73ff18ce534059803b265
- registry.k8s.io/kube-controller-manager@sha256:955b498eda0646d58e6d15e1156da8ac731dedf1a9a47b5fbccce0d5e29fd3fd
repoTags:
- registry.k8s.io/kube-controller-manager:v1.27.4
size: "108667702"
- id: b18bf71b941bae2e12db1c07e567ad14e4febbc778310a0fc64487f1ac877d79
repoDigests:
- docker.io/kindest/kindnetd@sha256:2c39858b71cf6c5737ff0daa8130a6574d4c6bd2a7dacaf002060c02f2bc1b4f
- docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974
repoTags:
- docker.io/kindest/kindnetd:v20230511-dc714da8
size: "60881430"
- id: 97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "51393451"
- id: 532e5a30e948f1c084333316b13e68fbeff8df667f3830b082005127a6d86317
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4bcb707da9898d2625f5d4edc6d0c96519a24f16db914fc673aa8f97e41dbabf
- registry.k8s.io/kube-proxy@sha256:f22b84e066d9bb46451754c220ae6f85bfaf4b661636af4bcc22c221f9b8ccca
repoTags:
- registry.k8s.io/kube-proxy:v1.27.4
size: "68099991"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "520014"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-723696
size: "34114467"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 6eb63895cb67fce76da3ed6eaaa865ff55e7c761c9e6a691a83855ff0987a085
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:516cd341872a8d3c967df9a69eeff664651efbb9df438f8dce6bf3ab430f26f8
- registry.k8s.io/kube-scheduler@sha256:5897d7a97d23dce25cbf36fcd6e919180a8ef904bf5156583ffdb6a733ab04af
repoTags:
- registry.k8s.io/kube-scheduler:v1.27.4
size: "57615158"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: ab73c7fd672341e41ec600081253d0b99ea31d0c1acdfb46a1485004472da7ac
repoDigests:
- docker.io/library/nginx@sha256:104c7c5c54f2685f0f46f3be607ce60da7085da3eaa5ad22d3d9f01594295e9c
- docker.io/library/nginx@sha256:d204087971390839f077afcaa4f5a771c1694610f0f7cb13a2d2a3aa520b053f
repoTags:
- docker.io/library/nginx:latest
size: "196196622"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-723696 image ls --format yaml --alsologtostderr:
I0821 11:13:37.185370 2767669 out.go:296] Setting OutFile to fd 1 ...
I0821 11:13:37.185597 2767669 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0821 11:13:37.185638 2767669 out.go:309] Setting ErrFile to fd 2...
I0821 11:13:37.185658 2767669 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0821 11:13:37.186092 2767669 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17102-2734539/.minikube/bin
I0821 11:13:37.186934 2767669 config.go:182] Loaded profile config "functional-723696": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
I0821 11:13:37.187101 2767669 config.go:182] Loaded profile config "functional-723696": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
I0821 11:13:37.187594 2767669 cli_runner.go:164] Run: docker container inspect functional-723696 --format={{.State.Status}}
I0821 11:13:37.222180 2767669 ssh_runner.go:195] Run: systemctl --version
I0821 11:13:37.222234 2767669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-723696
I0821 11:13:37.266493 2767669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36198 SSHKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/machines/functional-723696/id_rsa Username:docker}
I0821 11:13:37.391751 2767669 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.39s)
TestFunctional/parallel/ImageCommands/ImageBuild (3.81s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-723696 ssh pgrep buildkitd: exit status 1 (421.903815ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 image build -t localhost/my-image:functional-723696 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-723696 image build -t localhost/my-image:functional-723696 testdata/build --alsologtostderr: (3.143004732s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-arm64 -p functional-723696 image build -t localhost/my-image:functional-723696 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 92f5e84b844
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-723696
--> 9e2b36c563c
Successfully tagged localhost/my-image:functional-723696
9e2b36c563c392261e350f0a512d9a87472913daa94c1d7060d981dabeba1ac1
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-723696 image build -t localhost/my-image:functional-723696 testdata/build --alsologtostderr:
I0821 11:13:37.998919 2767747 out.go:296] Setting OutFile to fd 1 ...
I0821 11:13:38.002801 2767747 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0821 11:13:38.002875 2767747 out.go:309] Setting ErrFile to fd 2...
I0821 11:13:38.002899 2767747 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0821 11:13:38.003231 2767747 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17102-2734539/.minikube/bin
I0821 11:13:38.004028 2767747 config.go:182] Loaded profile config "functional-723696": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
I0821 11:13:38.006638 2767747 config.go:182] Loaded profile config "functional-723696": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
I0821 11:13:38.007162 2767747 cli_runner.go:164] Run: docker container inspect functional-723696 --format={{.State.Status}}
I0821 11:13:38.036245 2767747 ssh_runner.go:195] Run: systemctl --version
I0821 11:13:38.036296 2767747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-723696
I0821 11:13:38.062801 2767747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36198 SSHKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/machines/functional-723696/id_rsa Username:docker}
I0821 11:13:38.183594 2767747 build_images.go:151] Building image from path: /tmp/build.2865449116.tar
I0821 11:13:38.183669 2767747 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0821 11:13:38.228611 2767747 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2865449116.tar
I0821 11:13:38.238257 2767747 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2865449116.tar: stat -c "%s %y" /var/lib/minikube/build/build.2865449116.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2865449116.tar': No such file or directory
I0821 11:13:38.238330 2767747 ssh_runner.go:362] scp /tmp/build.2865449116.tar --> /var/lib/minikube/build/build.2865449116.tar (3072 bytes)
I0821 11:13:38.339081 2767747 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2865449116
I0821 11:13:38.378313 2767747 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2865449116 -xf /var/lib/minikube/build/build.2865449116.tar
I0821 11:13:38.401042 2767747 crio.go:297] Building image: /var/lib/minikube/build/build.2865449116
I0821 11:13:38.401155 2767747 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-723696 /var/lib/minikube/build/build.2865449116 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I0821 11:13:41.023669 2767747 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-723696 /var/lib/minikube/build/build.2865449116 --cgroup-manager=cgroupfs: (2.622477876s)
I0821 11:13:41.023739 2767747 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2865449116
I0821 11:13:41.035593 2767747 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2865449116.tar
I0821 11:13:41.046697 2767747 build_images.go:207] Built localhost/my-image:functional-723696 from /tmp/build.2865449116.tar
I0821 11:13:41.046789 2767747 build_images.go:123] succeeded building to: functional-723696
I0821 11:13:41.046803 2767747 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.81s)
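The three `STEP n/3` lines in the build stdout above correspond to a minimal Dockerfile plus one file of context. A sketch of recreating that context locally (the `/tmp/build-demo` path and `hello` content are illustrative; the real context ships as `testdata/build` in the minikube repo):

```shell
# Recreate the minimal build context that `minikube image build` consumes.
mkdir -p /tmp/build-demo
printf 'hello\n' > /tmp/build-demo/content.txt
cat > /tmp/build-demo/Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
# With a running cluster, the test's build step would then be roughly:
#   out/minikube-linux-arm64 -p functional-723696 image build \
#     -t localhost/my-image:functional-723696 /tmp/build-demo
```

On the crio runtime, the log shows minikube delegates this to `sudo podman build ... --cgroup-manager=cgroupfs` after scp-ing the tarred context to `/var/lib/minikube/build/`.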
TestFunctional/parallel/ImageCommands/Setup (1.81s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.782366542s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-723696
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.81s)
TestFunctional/parallel/UpdateContextCmd/no_changes (0.24s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.24s)
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.25s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.25s)
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.93s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 image load --daemon gcr.io/google-containers/addon-resizer:functional-723696 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-723696 image load --daemon gcr.io/google-containers/addon-resizer:functional-723696 --alsologtostderr: (5.640197445s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.93s)
TestFunctional/parallel/ServiceCmd/DeployApp (11.32s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1436: (dbg) Run:  kubectl --context functional-723696 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-723696 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-7b684b55f9-kss6r" [609c2d72-cff6-4e76-baf8-5594342c3cee] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-7b684b55f9-kss6r" [609c2d72-cff6-4e76-baf8-5594342c3cee] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.03477335s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.32s)
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.01s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 image load --daemon gcr.io/google-containers/addon-resizer:functional-723696 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-723696 image load --daemon gcr.io/google-containers/addon-resizer:functional-723696 --alsologtostderr: (2.775994099s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.01s)
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.96s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.398906248s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-723696
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 image load --daemon gcr.io/google-containers/addon-resizer:functional-723696 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-723696 image load --daemon gcr.io/google-containers/addon-resizer:functional-723696 --alsologtostderr: (4.226546272s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.96s)
TestFunctional/parallel/ServiceCmd/List (0.46s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.46s)
TestFunctional/parallel/ServiceCmd/JSONOutput (0.43s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 service list -o json
functional_test.go:1493: Took "424.340696ms" to run "out/minikube-linux-arm64 -p functional-723696 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.43s)
TestFunctional/parallel/ServiceCmd/HTTPS (0.46s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:30723
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.46s)
TestFunctional/parallel/ServiceCmd/Format (0.52s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.52s)
TestFunctional/parallel/ServiceCmd/URL (0.59s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:30723
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.59s)
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.66s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-723696 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-723696 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-723696 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2764093: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-723696 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.66s)
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.29s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 image save gcr.io/google-containers/addon-resizer:functional-723696 /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-arm64 -p functional-723696 image save gcr.io/google-containers/addon-resizer:functional-723696 /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr: (1.285196256s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.29s)
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-723696 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.8s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-723696 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [d9d4ed22-ebde-4232-aef7-01c7d11c7a1d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [d9d4ed22-ebde-4232-aef7-01c7d11c7a1d] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.021639058s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.80s)
TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 image rm gcr.io/google-containers/addon-resizer:functional-723696 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.3s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-arm64 -p functional-723696 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr: (1.669930996s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.30s)
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.02s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-723696
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 image save --daemon gcr.io/google-containers/addon-resizer:functional-723696 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-723696
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.02s)
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-723696 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.102.212.84 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-723696 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

TestFunctional/parallel/ProfileCmd/profile_list (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1314: Took "347.305479ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1328: Took "52.435101ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.40s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1365: Took "359.627613ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1378: Took "61.628789ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)
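The `Took "…"` figures above (and the per-test durations throughout this report) are Go duration strings. A minimal sketch of converting them to seconds for comparison; `go_duration_to_seconds` is a hypothetical helper for reading this report, not part of the minikube test suite:

```python
import re

# Unit multipliers for Go's time.Duration string format.
_UNITS = {"h": 3600.0, "m": 60.0, "s": 1.0, "ms": 1e-3, "us": 1e-6, "ns": 1e-9}

def go_duration_to_seconds(text: str) -> float:
    """Parse a Go duration string such as "359.627613ms" or "1m30.4s"."""
    total = 0.0
    # Multi-letter units must appear in the alternation before their
    # single-letter prefixes ("ms" before "m", "us"/"ns" before "s").
    for value, unit in re.findall(r"(\d+(?:\.\d+)?)(h|ms|us|ns|m|s)", text):
        total += float(value) * _UNITS[unit]
    return total

# Timings taken from the profile_json_output run above.
print(go_duration_to_seconds("359.627613ms"))
print(go_duration_to_seconds("61.628789ms"))
```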

TestFunctional/parallel/MountCmd/any-port (8.93s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-723696 /tmp/TestFunctionalparallelMountCmdany-port622117732/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1692616395118150007" to /tmp/TestFunctionalparallelMountCmdany-port622117732/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1692616395118150007" to /tmp/TestFunctionalparallelMountCmdany-port622117732/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1692616395118150007" to /tmp/TestFunctionalparallelMountCmdany-port622117732/001/test-1692616395118150007
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-723696 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (375.738628ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 21 11:13 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 21 11:13 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 21 11:13 test-1692616395118150007
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 ssh cat /mount-9p/test-1692616395118150007
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-723696 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [cee3c053-5d88-4ac5-bd98-a508f77cf8ec] Pending
helpers_test.go:344: "busybox-mount" [cee3c053-5d88-4ac5-bd98-a508f77cf8ec] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [cee3c053-5d88-4ac5-bd98-a508f77cf8ec] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [cee3c053-5d88-4ac5-bd98-a508f77cf8ec] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.014717855s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-723696 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-723696 /tmp/TestFunctionalparallelMountCmdany-port622117732/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.93s)

TestFunctional/parallel/MountCmd/specific-port (2.27s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-723696 /tmp/TestFunctionalparallelMountCmdspecific-port1162817450/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-723696 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (593.742621ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-723696 /tmp/TestFunctionalparallelMountCmdspecific-port1162817450/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-723696 ssh "sudo umount -f /mount-9p": exit status 1 (349.937262ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-723696 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-723696 /tmp/TestFunctionalparallelMountCmdspecific-port1162817450/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.27s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.94s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-723696 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1734868876/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-723696 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1734868876/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-723696 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1734868876/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-723696 ssh "findmnt -T" /mount1: exit status 1 (707.538634ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-723696 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-723696 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-723696 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1734868876/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-723696 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1734868876/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-723696 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1734868876/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.94s)

TestFunctional/delete_addon-resizer_images (0.09s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-723696
--- PASS: TestFunctional/delete_addon-resizer_images (0.09s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-723696
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-723696
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestIngressAddonLegacy/StartLegacyK8sCluster (90.4s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-arm64 start -p ingress-addon-legacy-354854 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-arm64 start -p ingress-addon-legacy-354854 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m30.400603429s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (90.40s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.5s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-354854 addons enable ingress --alsologtostderr -v=5
E0821 11:15:27.830486 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/addons-664125/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-354854 addons enable ingress --alsologtostderr -v=5: (11.503096682s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (11.50s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.83s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-354854 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.83s)

TestJSONOutput/start/Command (75.78s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-149184 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E0821 11:19:01.780052 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/functional-723696/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-149184 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (1m15.773364285s)
--- PASS: TestJSONOutput/start/Command (75.78s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.81s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-149184 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.81s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.76s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-149184 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.76s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.97s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-149184 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-149184 --output=json --user=testUser: (5.970868645s)
--- PASS: TestJSONOutput/stop/Command (5.97s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.28s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-719110 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-719110 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (110.664665ms)

-- stdout --
	{"specversion":"1.0","id":"3e8adc4f-3fb3-42d1-aa99-d0fe9fb41e2f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-719110] minikube v1.31.2 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"428930c4-02f4-4604-ad2e-e1067c32a11d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17102"}}
	{"specversion":"1.0","id":"b184c3dd-1fa1-48a0-8e31-cf11530fa63e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"66f4857e-639d-4fe1-9e8b-df3e05fd8973","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17102-2734539/kubeconfig"}}
	{"specversion":"1.0","id":"40034211-d056-41dd-8a63-e5e8c38c4b18","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17102-2734539/.minikube"}}
	{"specversion":"1.0","id":"39abb39f-cfa2-4f39-a389-5850b2834a72","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"d3c352cd-6509-4943-88d9-32b18e64cd63","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"af86f2d3-4421-4d77-ad6f-c73cb654861d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-719110" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-719110
--- PASS: TestErrorJSONOutput (0.28s)
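Each line in the `-- stdout --` block above is a CloudEvents envelope, the format minikube emits for `--output=json`; the test's exit status 56 corresponds to the `io.k8s.sigs.minikube.error` event. A minimal sketch of picking the error event out of such output — the two sample lines are copied from the run above, and the filtering logic is an illustration for reading the report, not something the test itself does:

```python
import json

# Two CloudEvents lines copied from the TestErrorJSONOutput run above:
# the initial step event and the terminal error event.
lines = [
    '{"specversion":"1.0","id":"3e8adc4f-3fb3-42d1-aa99-d0fe9fb41e2f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-719110] minikube v1.31.2 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}',
    '{"specversion":"1.0","id":"af86f2d3-4421-4d77-ad6f-c73cb654861d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver \'fail\' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}',
]

# Keep only error events and surface their name, exit code, and message.
errors = [json.loads(line)["data"] for line in lines
          if json.loads(line)["type"] == "io.k8s.sigs.minikube.error"]
for err in errors:
    print(err["name"], err["exitcode"], err["message"])
```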

TestKicCustomNetwork/create_custom_network (42.59s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-400829 --network=
E0821 11:20:23.701768 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/functional-723696/client.crt: no such file or directory
E0821 11:20:27.830750 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/addons-664125/client.crt: no such file or directory
E0821 11:20:31.841990 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/ingress-addon-legacy-354854/client.crt: no such file or directory
E0821 11:20:31.847558 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/ingress-addon-legacy-354854/client.crt: no such file or directory
E0821 11:20:31.857806 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/ingress-addon-legacy-354854/client.crt: no such file or directory
E0821 11:20:31.878050 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/ingress-addon-legacy-354854/client.crt: no such file or directory
E0821 11:20:31.918314 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/ingress-addon-legacy-354854/client.crt: no such file or directory
E0821 11:20:31.998562 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/ingress-addon-legacy-354854/client.crt: no such file or directory
E0821 11:20:32.158914 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/ingress-addon-legacy-354854/client.crt: no such file or directory
E0821 11:20:32.479434 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/ingress-addon-legacy-354854/client.crt: no such file or directory
E0821 11:20:33.120319 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/ingress-addon-legacy-354854/client.crt: no such file or directory
E0821 11:20:34.400540 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/ingress-addon-legacy-354854/client.crt: no such file or directory
E0821 11:20:36.961033 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/ingress-addon-legacy-354854/client.crt: no such file or directory
E0821 11:20:42.081578 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/ingress-addon-legacy-354854/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-400829 --network=: (40.469220058s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-400829" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-400829
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-400829: (2.093006083s)
--- PASS: TestKicCustomNetwork/create_custom_network (42.59s)

TestKicCustomNetwork/use_default_bridge_network (33.01s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-019684 --network=bridge
E0821 11:20:52.322527 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/ingress-addon-legacy-354854/client.crt: no such file or directory
E0821 11:21:12.803066 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/ingress-addon-legacy-354854/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-019684 --network=bridge: (31.01823509s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-019684" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-019684
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-019684: (1.963447321s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (33.01s)

TestKicExistingNetwork (35.03s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-217959 --network=existing-network
E0821 11:21:53.763832 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/ingress-addon-legacy-354854/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-217959 --network=existing-network: (32.861367546s)
helpers_test.go:175: Cleaning up "existing-network-217959" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-217959
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-217959: (2.012098894s)
--- PASS: TestKicExistingNetwork (35.03s)

TestKicCustomSubnet (39.72s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-219820 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-219820 --subnet=192.168.60.0/24: (37.534845089s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-219820 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-219820" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-219820
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-219820: (2.159927869s)
--- PASS: TestKicCustomSubnet (39.72s)

TestKicStaticIP (36.72s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-490452 --static-ip=192.168.200.200
E0821 11:22:39.857713 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/functional-723696/client.crt: no such file or directory
E0821 11:23:07.542043 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/functional-723696/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-490452 --static-ip=192.168.200.200: (34.517580009s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-490452 ip
helpers_test.go:175: Cleaning up "static-ip-490452" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-490452
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-490452: (2.040386893s)
--- PASS: TestKicStaticIP (36.72s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (73.58s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-062426 --driver=docker  --container-runtime=crio
E0821 11:23:15.684916 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/ingress-addon-legacy-354854/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-062426 --driver=docker  --container-runtime=crio: (37.14050358s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-064912 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-064912 --driver=docker  --container-runtime=crio: (31.331061296s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-062426
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-064912
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-064912" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-064912
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-064912: (1.928349064s)
helpers_test.go:175: Cleaning up "first-062426" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-062426
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-062426: (1.943049859s)
--- PASS: TestMinikubeProfile (73.58s)

TestMountStart/serial/StartWithMountFirst (7.11s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-191844 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-191844 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.111039422s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.11s)

TestMountStart/serial/VerifyMountFirst (0.27s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-191844 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

TestMountStart/serial/StartWithMountSecond (6.9s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-193723 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-193723 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.89843869s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.90s)

TestMountStart/serial/VerifyMountSecond (0.28s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-193723 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

TestMountStart/serial/DeleteFirst (1.71s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-191844 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-191844 --alsologtostderr -v=5: (1.708609199s)
--- PASS: TestMountStart/serial/DeleteFirst (1.71s)

TestMountStart/serial/VerifyMountPostDelete (0.29s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-193723 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.29s)

TestMountStart/serial/Stop (1.26s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-193723
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-193723: (1.264721007s)
--- PASS: TestMountStart/serial/Stop (1.26s)

TestMountStart/serial/RestartStopped (8.09s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-193723
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-193723: (7.088757081s)
--- PASS: TestMountStart/serial/RestartStopped (8.09s)

TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-193723 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

TestMultiNode/serial/FreshStart2Nodes (125.42s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-994910 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0821 11:25:27.830900 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/addons-664125/client.crt: no such file or directory
E0821 11:25:31.842430 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/ingress-addon-legacy-354854/client.crt: no such file or directory
E0821 11:25:59.525677 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/ingress-addon-legacy-354854/client.crt: no such file or directory
E0821 11:26:50.881434 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/addons-664125/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-arm64 start -p multinode-994910 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (2m4.875173735s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994910 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (125.42s)

TestMultiNode/serial/DeployApp2Nodes (5.87s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-994910 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-994910 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-994910 -- rollout status deployment/busybox: (3.744322524s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-994910 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-994910 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-994910 -- exec busybox-67b7f59bb-46dlp -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-994910 -- exec busybox-67b7f59bb-zhpmt -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-994910 -- exec busybox-67b7f59bb-46dlp -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-994910 -- exec busybox-67b7f59bb-zhpmt -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-994910 -- exec busybox-67b7f59bb-46dlp -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-994910 -- exec busybox-67b7f59bb-zhpmt -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.87s)

TestMultiNode/serial/AddNode (47.66s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-994910 -v 3 --alsologtostderr
E0821 11:27:39.857757 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/functional-723696/client.crt: no such file or directory
multinode_test.go:110: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-994910 -v 3 --alsologtostderr: (46.961231627s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994910 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (47.66s)

TestMultiNode/serial/ProfileList (0.47s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.47s)

TestMultiNode/serial/CopyFile (10.68s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994910 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994910 cp testdata/cp-test.txt multinode-994910:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994910 ssh -n multinode-994910 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994910 cp multinode-994910:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile104633443/001/cp-test_multinode-994910.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994910 ssh -n multinode-994910 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994910 cp multinode-994910:/home/docker/cp-test.txt multinode-994910-m02:/home/docker/cp-test_multinode-994910_multinode-994910-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994910 ssh -n multinode-994910 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994910 ssh -n multinode-994910-m02 "sudo cat /home/docker/cp-test_multinode-994910_multinode-994910-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994910 cp multinode-994910:/home/docker/cp-test.txt multinode-994910-m03:/home/docker/cp-test_multinode-994910_multinode-994910-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994910 ssh -n multinode-994910 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994910 ssh -n multinode-994910-m03 "sudo cat /home/docker/cp-test_multinode-994910_multinode-994910-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994910 cp testdata/cp-test.txt multinode-994910-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994910 ssh -n multinode-994910-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994910 cp multinode-994910-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile104633443/001/cp-test_multinode-994910-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994910 ssh -n multinode-994910-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994910 cp multinode-994910-m02:/home/docker/cp-test.txt multinode-994910:/home/docker/cp-test_multinode-994910-m02_multinode-994910.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994910 ssh -n multinode-994910-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994910 ssh -n multinode-994910 "sudo cat /home/docker/cp-test_multinode-994910-m02_multinode-994910.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994910 cp multinode-994910-m02:/home/docker/cp-test.txt multinode-994910-m03:/home/docker/cp-test_multinode-994910-m02_multinode-994910-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994910 ssh -n multinode-994910-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994910 ssh -n multinode-994910-m03 "sudo cat /home/docker/cp-test_multinode-994910-m02_multinode-994910-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994910 cp testdata/cp-test.txt multinode-994910-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994910 ssh -n multinode-994910-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994910 cp multinode-994910-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile104633443/001/cp-test_multinode-994910-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994910 ssh -n multinode-994910-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994910 cp multinode-994910-m03:/home/docker/cp-test.txt multinode-994910:/home/docker/cp-test_multinode-994910-m03_multinode-994910.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994910 ssh -n multinode-994910-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994910 ssh -n multinode-994910 "sudo cat /home/docker/cp-test_multinode-994910-m03_multinode-994910.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994910 cp multinode-994910-m03:/home/docker/cp-test.txt multinode-994910-m02:/home/docker/cp-test_multinode-994910-m03_multinode-994910-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994910 ssh -n multinode-994910-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994910 ssh -n multinode-994910-m02 "sudo cat /home/docker/cp-test_multinode-994910-m03_multinode-994910-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.68s)

TestMultiNode/serial/StopNode (2.3s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994910 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-arm64 -p multinode-994910 node stop m03: (1.236179768s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994910 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-994910 status: exit status 7 (541.157393ms)

-- stdout --
	multinode-994910
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-994910-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-994910-m03
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994910 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-994910 status --alsologtostderr: exit status 7 (524.156952ms)

-- stdout --
	multinode-994910
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-994910-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-994910-m03
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr ** 
	I0821 11:28:12.882051 2814469 out.go:296] Setting OutFile to fd 1 ...
	I0821 11:28:12.882164 2814469 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 11:28:12.882172 2814469 out.go:309] Setting ErrFile to fd 2...
	I0821 11:28:12.882177 2814469 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 11:28:12.882467 2814469 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17102-2734539/.minikube/bin
	I0821 11:28:12.882640 2814469 out.go:303] Setting JSON to false
	I0821 11:28:12.882715 2814469 mustload.go:65] Loading cluster: multinode-994910
	I0821 11:28:12.882830 2814469 notify.go:220] Checking for updates...
	I0821 11:28:12.883121 2814469 config.go:182] Loaded profile config "multinode-994910": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0821 11:28:12.883151 2814469 status.go:255] checking status of multinode-994910 ...
	I0821 11:28:12.883611 2814469 cli_runner.go:164] Run: docker container inspect multinode-994910 --format={{.State.Status}}
	I0821 11:28:12.903996 2814469 status.go:330] multinode-994910 host status = "Running" (err=<nil>)
	I0821 11:28:12.904040 2814469 host.go:66] Checking if "multinode-994910" exists ...
	I0821 11:28:12.904325 2814469 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-994910
	I0821 11:28:12.922756 2814469 host.go:66] Checking if "multinode-994910" exists ...
	I0821 11:28:12.923056 2814469 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0821 11:28:12.923112 2814469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-994910
	I0821 11:28:12.953472 2814469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36263 SSHKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/machines/multinode-994910/id_rsa Username:docker}
	I0821 11:28:13.045017 2814469 ssh_runner.go:195] Run: systemctl --version
	I0821 11:28:13.050795 2814469 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0821 11:28:13.064698 2814469 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0821 11:28:13.131590 2814469 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:55 SystemTime:2023-08-21 11:28:13.121230108 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0821 11:28:13.132170 2814469 kubeconfig.go:92] found "multinode-994910" server: "https://192.168.58.2:8443"
	I0821 11:28:13.132193 2814469 api_server.go:166] Checking apiserver status ...
	I0821 11:28:13.132236 2814469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0821 11:28:13.144974 2814469 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1232/cgroup
	I0821 11:28:13.156601 2814469 api_server.go:182] apiserver freezer: "5:freezer:/docker/044a79616bc979dbd0194b96cc19bbb9942147959a549722da18d30526d96040/crio/crio-03559917ca751a7c1690b8ba39297628d6e72894d48c15fc4c11b1c73864ca57"
	I0821 11:28:13.156666 2814469 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/044a79616bc979dbd0194b96cc19bbb9942147959a549722da18d30526d96040/crio/crio-03559917ca751a7c1690b8ba39297628d6e72894d48c15fc4c11b1c73864ca57/freezer.state
	I0821 11:28:13.166968 2814469 api_server.go:204] freezer state: "THAWED"
	I0821 11:28:13.167008 2814469 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0821 11:28:13.175839 2814469 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0821 11:28:13.175864 2814469 status.go:421] multinode-994910 apiserver status = Running (err=<nil>)
	I0821 11:28:13.175875 2814469 status.go:257] multinode-994910 status: &{Name:multinode-994910 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0821 11:28:13.175896 2814469 status.go:255] checking status of multinode-994910-m02 ...
	I0821 11:28:13.176203 2814469 cli_runner.go:164] Run: docker container inspect multinode-994910-m02 --format={{.State.Status}}
	I0821 11:28:13.193229 2814469 status.go:330] multinode-994910-m02 host status = "Running" (err=<nil>)
	I0821 11:28:13.193255 2814469 host.go:66] Checking if "multinode-994910-m02" exists ...
	I0821 11:28:13.193552 2814469 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-994910-m02
	I0821 11:28:13.211363 2814469 host.go:66] Checking if "multinode-994910-m02" exists ...
	I0821 11:28:13.211677 2814469 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0821 11:28:13.211735 2814469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-994910-m02
	I0821 11:28:13.228723 2814469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36268 SSHKeyPath:/home/jenkins/minikube-integration/17102-2734539/.minikube/machines/multinode-994910-m02/id_rsa Username:docker}
	I0821 11:28:13.320245 2814469 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0821 11:28:13.334054 2814469 status.go:257] multinode-994910-m02 status: &{Name:multinode-994910-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0821 11:28:13.334089 2814469 status.go:255] checking status of multinode-994910-m03 ...
	I0821 11:28:13.334412 2814469 cli_runner.go:164] Run: docker container inspect multinode-994910-m03 --format={{.State.Status}}
	I0821 11:28:13.352010 2814469 status.go:330] multinode-994910-m03 host status = "Stopped" (err=<nil>)
	I0821 11:28:13.352029 2814469 status.go:343] host is not running, skipping remaining checks
	I0821 11:28:13.352037 2814469 status.go:257] multinode-994910-m03 status: &{Name:multinode-994910-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.30s)

TestMultiNode/serial/StartAfterStop (12.19s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:244: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994910 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-linux-arm64 -p multinode-994910 node start m03 --alsologtostderr: (11.358763601s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994910 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (12.19s)

TestMultiNode/serial/RestartKeepsNodes (123.51s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-994910
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-994910
multinode_test.go:290: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-994910: (25.098583275s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-994910 --wait=true -v=8 --alsologtostderr
E0821 11:30:27.831195 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/addons-664125/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-arm64 start -p multinode-994910 --wait=true -v=8 --alsologtostderr: (1m38.279793159s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-994910
--- PASS: TestMultiNode/serial/RestartKeepsNodes (123.51s)

TestMultiNode/serial/DeleteNode (5.16s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994910 node delete m03
E0821 11:30:31.842407 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/ingress-addon-legacy-354854/client.crt: no such file or directory
multinode_test.go:394: (dbg) Done: out/minikube-linux-arm64 -p multinode-994910 node delete m03: (4.302761652s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994910 status --alsologtostderr
multinode_test.go:414: (dbg) Run:  docker volume ls
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.16s)

TestMultiNode/serial/StopMultiNode (24.02s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994910 stop
multinode_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p multinode-994910 stop: (23.837773964s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994910 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-994910 status: exit status 7 (90.179387ms)

-- stdout --
	multinode-994910
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-994910-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994910 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-994910 status --alsologtostderr: exit status 7 (87.09651ms)

-- stdout --
	multinode-994910
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-994910-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0821 11:30:58.195129 2822532 out.go:296] Setting OutFile to fd 1 ...
	I0821 11:30:58.195307 2822532 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 11:30:58.195337 2822532 out.go:309] Setting ErrFile to fd 2...
	I0821 11:30:58.195358 2822532 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 11:30:58.195595 2822532 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17102-2734539/.minikube/bin
	I0821 11:30:58.195787 2822532 out.go:303] Setting JSON to false
	I0821 11:30:58.195926 2822532 mustload.go:65] Loading cluster: multinode-994910
	I0821 11:30:58.196013 2822532 notify.go:220] Checking for updates...
	I0821 11:30:58.196378 2822532 config.go:182] Loaded profile config "multinode-994910": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0821 11:30:58.196414 2822532 status.go:255] checking status of multinode-994910 ...
	I0821 11:30:58.197145 2822532 cli_runner.go:164] Run: docker container inspect multinode-994910 --format={{.State.Status}}
	I0821 11:30:58.215173 2822532 status.go:330] multinode-994910 host status = "Stopped" (err=<nil>)
	I0821 11:30:58.215199 2822532 status.go:343] host is not running, skipping remaining checks
	I0821 11:30:58.215206 2822532 status.go:257] multinode-994910 status: &{Name:multinode-994910 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0821 11:30:58.215229 2822532 status.go:255] checking status of multinode-994910-m02 ...
	I0821 11:30:58.215554 2822532 cli_runner.go:164] Run: docker container inspect multinode-994910-m02 --format={{.State.Status}}
	I0821 11:30:58.232882 2822532 status.go:330] multinode-994910-m02 host status = "Stopped" (err=<nil>)
	I0821 11:30:58.232901 2822532 status.go:343] host is not running, skipping remaining checks
	I0821 11:30:58.232908 2822532 status.go:257] multinode-994910-m02 status: &{Name:multinode-994910-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.02s)

TestMultiNode/serial/RestartMultiNode (86.07s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:344: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:354: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-994910 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:354: (dbg) Done: out/minikube-linux-arm64 start -p multinode-994910 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m25.332134539s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-arm64 -p multinode-994910 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (86.07s)

TestMultiNode/serial/ValidateNameConflict (35.28s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-994910
multinode_test.go:452: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-994910-m02 --driver=docker  --container-runtime=crio
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-994910-m02 --driver=docker  --container-runtime=crio: exit status 14 (79.17506ms)

-- stdout --
	* [multinode-994910-m02] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17102
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17102-2734539/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17102-2734539/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-994910-m02' is duplicated with machine name 'multinode-994910-m02' in profile 'multinode-994910'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-994910-m03 --driver=docker  --container-runtime=crio
E0821 11:32:39.857764 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/functional-723696/client.crt: no such file or directory
multinode_test.go:460: (dbg) Done: out/minikube-linux-arm64 start -p multinode-994910-m03 --driver=docker  --container-runtime=crio: (32.867700949s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-994910
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-994910: exit status 80 (332.322706ms)

-- stdout --
	* Adding node m03 to cluster multinode-994910
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-994910-m03 already exists in multinode-994910-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-994910-m03
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-994910-m03: (1.944632585s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (35.28s)

TestPreload (180.36s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-567183 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E0821 11:34:02.903049 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/functional-723696/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-567183 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m39.239086109s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-567183 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-567183 image pull gcr.io/k8s-minikube/busybox: (2.075482986s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-567183
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-567183: (5.846852652s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-567183 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E0821 11:35:27.831130 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/addons-664125/client.crt: no such file or directory
E0821 11:35:31.841974 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/ingress-addon-legacy-354854/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-567183 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (1m10.610820841s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-567183 image list
helpers_test.go:175: Cleaning up "test-preload-567183" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-567183
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-567183: (2.342541626s)
--- PASS: TestPreload (180.36s)

TestScheduledStopUnix (107.51s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-359553 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-359553 --memory=2048 --driver=docker  --container-runtime=crio: (32.092688514s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-359553 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-359553 -n scheduled-stop-359553
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-359553 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-359553 --cancel-scheduled
E0821 11:36:54.888797 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/ingress-addon-legacy-354854/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-359553 -n scheduled-stop-359553
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-359553
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-359553 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0821 11:37:39.857935 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/functional-723696/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-359553
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-359553: exit status 7 (67.982503ms)

-- stdout --
	scheduled-stop-359553
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-359553 -n scheduled-stop-359553
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-359553 -n scheduled-stop-359553: exit status 7 (65.97396ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-359553" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-359553
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-359553: (3.795574228s)
--- PASS: TestScheduledStopUnix (107.51s)

TestInsufficientStorage (10.91s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-839198 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-839198 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (8.395569029s)

-- stdout --
	{"specversion":"1.0","id":"a0ed3a30-7547-4c53-b4a2-2d1a024a8a00","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-839198] minikube v1.31.2 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"0ff11194-7331-42c8-883a-b992e56f9fa3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17102"}}
	{"specversion":"1.0","id":"ab4872c4-7bb1-4ab4-82ed-a0996e4c544f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"16c8d957-eb4e-43ec-ad82-a3e45460d356","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17102-2734539/kubeconfig"}}
	{"specversion":"1.0","id":"97712d8e-8e3f-467a-8c5f-7774905707c3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17102-2734539/.minikube"}}
	{"specversion":"1.0","id":"c29fe56a-d783-4279-8013-2cdf0b72b334","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"b8df381c-7082-40ab-83a6-0dd33e939f55","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"8debfc12-0142-4c13-b2b4-0f0a04c58ddc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"983a9f65-ebe6-4f5d-b1a6-c094fb718250","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"bb92e06f-f03b-4500-b892-d0df3de6e2f5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"c2f381a1-fb16-4c44-99bf-2b1e70fef0ae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"05d43445-3928-4427-a6be-34dd0c8d5723","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-839198 in cluster insufficient-storage-839198","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"756f9e5b-c74f-465e-a4c6-9f08b76590c3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"0e9d2040-9d3e-4463-881e-1c0496d26bc5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"47c895ff-c2a1-45db-999e-2691b8121967","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-839198 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-839198 --output=json --layout=cluster: exit status 7 (313.008441ms)

-- stdout --
	{"Name":"insufficient-storage-839198","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-839198","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0821 11:38:02.429033 2839446 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-839198" does not appear in /home/jenkins/minikube-integration/17102-2734539/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-839198 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-839198 --output=json --layout=cluster: exit status 7 (303.212852ms)

-- stdout --
	{"Name":"insufficient-storage-839198","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-839198","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0821 11:38:02.732929 2839500 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-839198" does not appear in /home/jenkins/minikube-integration/17102-2734539/kubeconfig
	E0821 11:38:02.745357 2839500 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/insufficient-storage-839198/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-839198" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-839198
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-839198: (1.896029319s)
--- PASS: TestInsufficientStorage (10.91s)

TestKubernetesUpgrade (131.89s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-831512 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:234: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-831512 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m4.521677481s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-831512
version_upgrade_test.go:239: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-831512: (1.305229874s)
version_upgrade_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-831512 status --format={{.Host}}
version_upgrade_test.go:244: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-831512 status --format={{.Host}}: exit status 7 (76.100528ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:246: status error: exit status 7 (may be ok)
version_upgrade_test.go:255: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-831512 --memory=2200 --kubernetes-version=v1.28.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:255: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-831512 --memory=2200 --kubernetes-version=v1.28.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (32.092176744s)
version_upgrade_test.go:260: (dbg) Run:  kubectl --context kubernetes-upgrade-831512 version --output=json
version_upgrade_test.go:279: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:281: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-831512 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:281: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-831512 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio: exit status 106 (243.111703ms)

-- stdout --
	* [kubernetes-upgrade-831512] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17102
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17102-2734539/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17102-2734539/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.28.0-rc.1 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-831512
	    minikube start -p kubernetes-upgrade-831512 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8315122 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.28.0-rc.1, by running:
	    
	    minikube start -p kubernetes-upgrade-831512 --kubernetes-version=v1.28.0-rc.1
	    

** /stderr **
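The K8S_DOWNGRADE_UNSUPPORTED guard refuses to move an existing cluster backwards in Kubernetes versions. A minimal sketch of that kind of version comparison using `sort -V` (an illustration only, not minikube's actual implementation):

```shell
current="v1.28.0-rc.1"    # version already deployed in the cluster (from the log above)
requested="v1.16.0"       # version the downgrade attempt asked for
# sort -V orders version strings numerically; the newest of the pair comes last
newest=$(printf '%s\n%s\n' "$current" "$requested" | sort -V | tail -n 1)
if [ "$newest" = "$current" ] && [ "$current" != "$requested" ]; then
  reason="K8S_DOWNGRADE_UNSUPPORTED"   # requested version is older: refuse
fi
echo "$reason"
```

The suggested workarounds in the log all avoid an in-place downgrade: either delete and recreate the profile at the old version, or start a second profile.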
version_upgrade_test.go:285: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:287: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-831512 --memory=2200 --kubernetes-version=v1.28.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:287: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-831512 --memory=2200 --kubernetes-version=v1.28.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (31.144675771s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-831512" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-831512
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-831512: (2.306173826s)
--- PASS: TestKubernetesUpgrade (131.89s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-536895 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-536895 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (83.921739ms)

-- stdout --
	* [NoKubernetes-536895] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17102
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17102-2734539/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17102-2734539/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
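The MK_USAGE exit above comes from combining `--no-kubernetes` with a pinned `--kubernetes-version` (here the pin is still set via global config). A hypothetical re-creation of that mutual-exclusion check, mirroring the exit status 14 in the log:

```shell
# Hypothetical stand-in for minikube's flag validation (illustration only):
# --no-kubernetes and --kubernetes-version are mutually exclusive.
no_kubernetes="true"
kubernetes_version="1.20"   # e.g. left over from `minikube config set kubernetes-version`
if [ "$no_kubernetes" = "true" ] && [ -n "$kubernetes_version" ]; then
  echo "X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes" >&2
  exit_code=14
else
  exit_code=0
fi
echo "$exit_code"
```

Clearing the pin with `minikube config unset kubernetes-version`, as the error message suggests, empties the second condition and lets `--no-kubernetes` proceed.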
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

TestNoKubernetes/serial/StartWithK8s (44.24s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-536895 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-536895 --driver=docker  --container-runtime=crio: (43.560259061s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-536895 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (44.24s)

TestNoKubernetes/serial/StartWithStopK8s (11.27s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-536895 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-536895 --no-kubernetes --driver=docker  --container-runtime=crio: (8.648858042s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-536895 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-536895 status -o json: exit status 2 (433.63574ms)

-- stdout --
	{"Name":"NoKubernetes-536895","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
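The exit status 2 above is expected here: `minikube status` signals a stopped kubelet through its exit code while still printing the JSON. The JSON captured in the log can be parsed with POSIX `sed` (jq would also work) to confirm the host is running while the kubelet stays stopped:

```shell
# Status JSON copied from the test output above
status='{"Name":"NoKubernetes-536895","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}'

# Extract a string-valued field from the flat JSON object
field() { printf '%s' "$status" | sed -n "s/.*\"$1\":\"\([^\"]*\)\".*/\1/p"; }

host=$(field Host)
kubelet=$(field Kubelet)
echo "$host $kubelet"
```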
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-536895
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-536895: (2.190747406s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (11.27s)

TestNoKubernetes/serial/Start (9.9s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-536895 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-536895 --no-kubernetes --driver=docker  --container-runtime=crio: (9.904136817s)
--- PASS: TestNoKubernetes/serial/Start (9.90s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.39s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-536895 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-536895 "sudo systemctl is-active --quiet service kubelet": exit status 1 (393.610606ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
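The "ssh: Process exited with status 3" above is the expected signal: `systemctl is-active --quiet <unit>` exits 0 only when the unit is active, and inactive units yield a non-zero code (3 for inactive), which the test treats as "kubelet not running". A stand-in sketch without systemd, mimicking those exit-code semantics:

```shell
# Hypothetical stand-in for `systemctl is-active --quiet` (illustration only):
# return 0 for an active unit, 3 for an inactive one.
is_active() {
  case "$1" in
    kubelet) return 3 ;;  # inactive, matching the log above
    *) return 0 ;;
  esac
}
is_active kubelet
rc=$?
echo "$rc"
```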
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.39s)

TestNoKubernetes/serial/ProfileList (1.12s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.12s)

TestNoKubernetes/serial/Stop (1.31s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-536895
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-536895: (1.313864385s)
--- PASS: TestNoKubernetes/serial/Stop (1.31s)

TestNoKubernetes/serial/StartNoArgs (8.16s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-536895 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-536895 --driver=docker  --container-runtime=crio: (8.15647169s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.16s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.45s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-536895 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-536895 "sudo systemctl is-active --quiet service kubelet": exit status 1 (448.422335ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.45s)

TestStoppedBinaryUpgrade/Setup (0.95s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.95s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.78s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:218: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-816837
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.78s)

TestPause/serial/Start (55.83s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-736825 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-736825 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (55.830482914s)
--- PASS: TestPause/serial/Start (55.83s)

TestPause/serial/SecondStartNoReconfiguration (44.35s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-736825 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-736825 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (44.285095283s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (44.35s)

TestNetworkPlugins/group/false (3.82s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-473827 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-473827 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (290.074168ms)

-- stdout --
	* [false-473827] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17102
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17102-2734539/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17102-2734539/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0821 11:43:33.441728 2871973 out.go:296] Setting OutFile to fd 1 ...
	I0821 11:43:33.441958 2871973 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 11:43:33.441984 2871973 out.go:309] Setting ErrFile to fd 2...
	I0821 11:43:33.442002 2871973 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0821 11:43:33.442282 2871973 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17102-2734539/.minikube/bin
	I0821 11:43:33.442737 2871973 out.go:303] Setting JSON to false
	I0821 11:43:33.443940 2871973 start.go:128] hostinfo: {"hostname":"ip-172-31-31-251","uptime":73557,"bootTime":1692544656,"procs":400,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0821 11:43:33.444035 2871973 start.go:138] virtualization:  
	I0821 11:43:33.446699 2871973 out.go:177] * [false-473827] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0821 11:43:33.449234 2871973 out.go:177]   - MINIKUBE_LOCATION=17102
	I0821 11:43:33.451097 2871973 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0821 11:43:33.449321 2871973 notify.go:220] Checking for updates...
	I0821 11:43:33.453074 2871973 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17102-2734539/kubeconfig
	I0821 11:43:33.454907 2871973 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17102-2734539/.minikube
	I0821 11:43:33.456664 2871973 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0821 11:43:33.458438 2871973 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0821 11:43:33.461009 2871973 config.go:182] Loaded profile config "pause-736825": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.27.4
	I0821 11:43:33.461098 2871973 driver.go:373] Setting default libvirt URI to qemu:///system
	I0821 11:43:33.517743 2871973 docker.go:121] docker version: linux-24.0.5:Docker Engine - Community
	I0821 11:43:33.517842 2871973 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0821 11:43:33.658646 2871973 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:36 OomKillDisable:true NGoroutines:45 SystemTime:2023-08-21 11:43:33.646043204 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215105536 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:24.0.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.20.2]] Warnings:<nil>}}
	I0821 11:43:33.658757 2871973 docker.go:294] overlay module found
	I0821 11:43:33.660836 2871973 out.go:177] * Using the docker driver based on user configuration
	I0821 11:43:33.662505 2871973 start.go:298] selected driver: docker
	I0821 11:43:33.662522 2871973 start.go:902] validating driver "docker" against <nil>
	I0821 11:43:33.662541 2871973 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0821 11:43:33.664949 2871973 out.go:177] 
	W0821 11:43:33.666553 2871973 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0821 11:43:33.668655 2871973 out.go:177] 

** /stderr **
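The failure above comes from pairing `--container-runtime=crio` with `--cni=false`: the crio runtime cannot run without a CNI, so validation exits with status 14 before any cluster is created. A hedged sketch of rewriting the flags so crio gets a CNI (the plugin name "bridge" is just an example choice, not taken from this log):

```shell
# Flags from the failing invocation above; swap --cni=false for a real plugin
args="--memory=2048 --cni=false --container-runtime=crio"
fixed=$(printf '%s' "$args" | sed 's/--cni=false/--cni=bridge/')
echo "$fixed"
```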
net_test.go:88: 
----------------------- debugLogs start: false-473827 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-473827

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-473827

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-473827

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-473827

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-473827

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-473827

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-473827

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-473827

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-473827

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-473827

>>> host: /etc/nsswitch.conf:
* Profile "false-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-473827"

>>> host: /etc/hosts:
* Profile "false-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-473827"

>>> host: /etc/resolv.conf:
* Profile "false-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-473827"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-473827

>>> host: crictl pods:
* Profile "false-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-473827"

>>> host: crictl containers:
* Profile "false-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-473827"

>>> k8s: describe netcat deployment:
error: context "false-473827" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-473827" does not exist

>>> k8s: netcat logs:
error: context "false-473827" does not exist

>>> k8s: describe coredns deployment:
error: context "false-473827" does not exist

>>> k8s: describe coredns pods:
error: context "false-473827" does not exist

>>> k8s: coredns logs:
error: context "false-473827" does not exist

>>> k8s: describe api server pod(s):
error: context "false-473827" does not exist

>>> k8s: api server logs:
error: context "false-473827" does not exist

>>> host: /etc/cni:
* Profile "false-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-473827"

>>> host: ip a s:
* Profile "false-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-473827"

>>> host: ip r s:
* Profile "false-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-473827"

>>> host: iptables-save:
* Profile "false-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-473827"

>>> host: iptables table nat:
* Profile "false-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-473827"

>>> k8s: describe kube-proxy daemon set:
error: context "false-473827" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-473827" does not exist

>>> k8s: kube-proxy logs:
error: context "false-473827" does not exist

>>> host: kubelet daemon status:
* Profile "false-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-473827"

>>> host: kubelet daemon config:
* Profile "false-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-473827"

>>> k8s: kubelet logs:
* Profile "false-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-473827"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-473827"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-473827"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17102-2734539/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 21 Aug 2023 11:43:28 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.67.2:8443
  name: pause-736825
contexts:
- context:
    cluster: pause-736825
    extensions:
    - extension:
        last-update: Mon, 21 Aug 2023 11:43:28 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: context_info
    namespace: default
    user: pause-736825
  name: pause-736825
current-context: pause-736825
kind: Config
preferences: {}
users:
- name: pause-736825
  user:
    client-certificate: /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/pause-736825/client.crt
    client-key: /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/pause-736825/client.key
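The kubeconfig dump above shows why debugLogs sees `pause-736825` rather than `false-473827`: the active context never changed because the `false-473827` cluster was never created. Pulling the active context out of such a dump with plain `sed` (here against a two-line excerpt of the YAML; `kubectl config current-context` reports the same value):

```shell
# Excerpt of the kubeconfig dump above
kubeconfig='current-context: pause-736825
kind: Config'
ctx=$(printf '%s\n' "$kubeconfig" | sed -n 's/^current-context: //p')
echo "$ctx"
```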

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-473827

>>> host: docker daemon status:
* Profile "false-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-473827"

>>> host: docker daemon config:
* Profile "false-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-473827"

>>> host: /etc/docker/daemon.json:
* Profile "false-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-473827"

>>> host: docker system info:
* Profile "false-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-473827"

>>> host: cri-docker daemon status:
* Profile "false-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-473827"

>>> host: cri-docker daemon config:
* Profile "false-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-473827"
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-473827"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-473827"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-473827"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-473827"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-473827"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-473827"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-473827"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-473827"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-473827"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-473827"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-473827"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-473827"

                                                
                                                
----------------------- debugLogs end: false-473827 [took: 3.374697819s] --------------------------------
helpers_test.go:175: Cleaning up "false-473827" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-473827
--- PASS: TestNetworkPlugins/group/false (3.82s)
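The kubectl config dumped in the debugLogs above is a standard kubeconfig. As a minimal stdlib-only sketch (the embedded sample is trimmed from this run; real code should use a proper YAML parser, and `kubeconfig_field` is a hypothetical helper, not part of any test here):

```python
# Pull simple scalar fields out of a kubeconfig dump with plain string
# handling; a sketch only, not a substitute for a YAML parser.
KUBECONFIG = """\
apiVersion: v1
clusters:
- cluster:
    server: https://192.168.67.2:8443
  name: pause-736825
current-context: pause-736825
"""

def kubeconfig_field(text: str, key: str) -> str:
    # Return the value of the first line whose key matches, ignoring
    # indentation and a leading "- " list marker.
    for line in text.splitlines():
        stripped = line.strip().lstrip("- ")
        if stripped.startswith(key + ":"):
            return stripped.split(":", 1)[1].strip()
    raise KeyError(key)

print(kubeconfig_field(KUBECONFIG, "current-context"))  # pause-736825
print(kubeconfig_field(KUBECONFIG, "server"))           # https://192.168.67.2:8443
```

This is how the harness's helpers can locate the active profile's context and API server endpoint from a config dump.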

TestPause/serial/Pause (1.19s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-736825 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-736825 --alsologtostderr -v=5: (1.185668523s)
--- PASS: TestPause/serial/Pause (1.19s)

TestPause/serial/VerifyStatus (0.41s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-736825 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-736825 --output=json --layout=cluster: exit status 2 (405.537336ms)

-- stdout --
	{"Name":"pause-736825","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-736825","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.41s)
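The `--output=json --layout=cluster` payload above encodes component state as HTTP-style codes (200 OK, 405 Stopped, 418 Paused). A hedged sketch of interpreting it, using the exact payload captured in this run; the pass/fail rule in `is_paused` is an illustration of what VerifyStatus checks, not minikube's own implementation:

```python
import json

# Status payload from `minikube status -p pause-736825 --output=json
# --layout=cluster` in the run above.
PAYLOAD = '''{"Name":"pause-736825","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-736825","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}'''

def is_paused(status: dict) -> bool:
    # Treat the cluster as paused when the top-level state is Paused (418)
    # and every node reports its apiserver paused and kubelet stopped (405).
    if status["StatusCode"] != 418:
        return False
    for node in status["Nodes"]:
        comps = node["Components"]
        if comps["apiserver"]["StatusCode"] != 418:
            return False
        if comps["kubelet"]["StatusCode"] != 405:
            return False
    return True

status = json.loads(PAYLOAD)
print(is_paused(status))  # True
```

Note that the command itself exits with status 2 while paused, so the test asserts on the JSON body rather than the exit code.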

TestPause/serial/Unpause (0.96s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-736825 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.96s)

TestPause/serial/PauseAgain (1.39s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-736825 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-736825 --alsologtostderr -v=5: (1.386634503s)
--- PASS: TestPause/serial/PauseAgain (1.39s)

TestPause/serial/DeletePaused (3.01s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-736825 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-736825 --alsologtostderr -v=5: (3.014416119s)
--- PASS: TestPause/serial/DeletePaused (3.01s)

TestPause/serial/VerifyDeletedResources (0.58s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-736825
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-736825: exit status 1 (18.190671ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-736825: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.58s)
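VerifyDeletedResources passes precisely because the lookup fails: `docker volume inspect` exiting non-zero with "no such volume" is the evidence that cleanup worked. A sketch of that inverted check, with the docker call simulated by a stand-in shell command (an assumption for runnability; the real test shells out to docker):

```python
import subprocess

def resource_is_gone(cmd: list[str]) -> bool:
    # Cleanup is verified by the lookup *failing*: a non-zero exit from
    # the inspect command means the resource no longer exists.
    result = subprocess.run(cmd, capture_output=True, text=True)
    return result.returncode != 0

# Stand-in for `docker volume inspect pause-736825` after deletion: the
# real command prints "[]" and exits 1 with "no such volume" on stderr.
fake_inspect = [
    "/bin/sh", "-c",
    'echo "[]"; echo "Error response from daemon: get pause-736825: no such volume" >&2; exit 1',
]
print(resource_is_gone(fake_inspect))  # True
```

The same pattern covers the `docker ps -a` and `docker network ls` checks, where the test instead greps the successful output for the profile name.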

TestStartStop/group/old-k8s-version/serial/FirstStart (127.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-760796 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
E0821 11:45:27.830670 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/addons-664125/client.crt: no such file or directory
E0821 11:45:31.841951 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/ingress-addon-legacy-354854/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-760796 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (2m7.012143118s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (127.01s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.59s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-760796 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b6cf41f8-4571-4bb4-923e-f504bfa14cbb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b6cf41f8-4571-4bb4-923e-f504bfa14cbb] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.032948623s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-760796 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.59s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.05s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-760796 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-760796 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.05s)

TestStartStop/group/old-k8s-version/serial/Stop (12.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-760796 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-760796 --alsologtostderr -v=3: (12.118392864s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.12s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-760796 -n old-k8s-version-760796
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-760796 -n old-k8s-version-760796: exit status 7 (72.139939ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-760796 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)
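The harness tolerates exit status 7 from `minikube status` against a stopped profile, hence the "status error: exit status 7 (may be ok)" lines above. A small sketch of that tolerance, simulating the status command with a shell stub; the meaning of code 7 here (cleanly stopped host) is taken from this log, not from minikube documentation:

```python
import subprocess

# 0 = everything running; 7 is what a cleanly stopped host returns in this log.
ACCEPTABLE = {0, 7}

def host_status(cmd: list[str]) -> str:
    # Run the status command but accept the stopped-host exit code,
    # mirroring the test's "exit status 7 (may be ok)" handling.
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode not in ACCEPTABLE:
        raise RuntimeError(f"status failed: exit {result.returncode}")
    return result.stdout.strip()

# Stand-in for `minikube status --format={{.Host}}` on a stopped profile.
fake_status = ["/bin/sh", "-c", "echo Stopped; exit 7"]
print(host_status(fake_status))  # Stopped
```

With "Stopped" confirmed, the test can safely run `addons enable dashboard` against the offline profile, since addon configuration is persisted for the next start.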

TestStartStop/group/old-k8s-version/serial/SecondStart (81.47s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-760796 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-760796 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (1m20.980296714s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-760796 -n old-k8s-version-760796
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (81.47s)

TestStartStop/group/no-preload/serial/FirstStart (71.33s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-180246 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0-rc.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-180246 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0-rc.1: (1m11.32887924s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (71.33s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-hb9mr" [6ed107e2-6080-40e3-b5e2-bab76edb3458] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.032186332s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-hb9mr" [6ed107e2-6080-40e3-b5e2-bab76edb3458] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009845754s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-760796 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.47s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p old-k8s-version-760796 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20220726-ed811e41
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.47s)

TestStartStop/group/old-k8s-version/serial/Pause (4.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-760796 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-760796 --alsologtostderr -v=1: (1.310090007s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-760796 -n old-k8s-version-760796
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-760796 -n old-k8s-version-760796: exit status 2 (559.338633ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-760796 -n old-k8s-version-760796
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-760796 -n old-k8s-version-760796: exit status 2 (462.061965ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-760796 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-760796 -n old-k8s-version-760796
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-760796 -n old-k8s-version-760796
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (4.26s)

TestStartStop/group/embed-certs/serial/FirstStart (81.44s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-848104 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-848104 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.4: (1m21.435034919s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (81.44s)

TestStartStop/group/no-preload/serial/DeployApp (10.73s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-180246 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [54b66c9d-d6e6-4bff-b981-4fc5a11be599] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [54b66c9d-d6e6-4bff-b981-4fc5a11be599] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.042093352s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-180246 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.73s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.81s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-180246 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-180246 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.61597014s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-180246 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.81s)

TestStartStop/group/no-preload/serial/Stop (12.43s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-180246 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-180246 --alsologtostderr -v=3: (12.428270181s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.43s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-180246 -n no-preload-180246
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-180246 -n no-preload-180246: exit status 7 (81.940194ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-180246 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/no-preload/serial/SecondStart (348.08s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-180246 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0-rc.1
E0821 11:50:27.831347 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/addons-664125/client.crt: no such file or directory
E0821 11:50:31.842529 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/ingress-addon-legacy-354854/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-180246 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0-rc.1: (5m47.400205272s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-180246 -n no-preload-180246
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (348.08s)

TestStartStop/group/embed-certs/serial/DeployApp (9.62s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-848104 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ef3418a2-65a5-46d0-a4c5-7007352a3c34] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ef3418a2-65a5-46d0-a4c5-7007352a3c34] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.039808773s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-848104 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.62s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.23s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-848104 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0821 11:50:42.903724 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/functional-723696/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-848104 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.105078149s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-848104 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.23s)

TestStartStop/group/embed-certs/serial/Stop (12.13s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-848104 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-848104 --alsologtostderr -v=3: (12.132094203s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.13s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-848104 -n embed-certs-848104
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-848104 -n embed-certs-848104: exit status 7 (85.742654ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-848104 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/embed-certs/serial/SecondStart (354.12s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-848104 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.4
E0821 11:52:08.156664 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/old-k8s-version-760796/client.crt: no such file or directory
E0821 11:52:08.162110 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/old-k8s-version-760796/client.crt: no such file or directory
E0821 11:52:08.172344 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/old-k8s-version-760796/client.crt: no such file or directory
E0821 11:52:08.192610 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/old-k8s-version-760796/client.crt: no such file or directory
E0821 11:52:08.232888 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/old-k8s-version-760796/client.crt: no such file or directory
E0821 11:52:08.313202 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/old-k8s-version-760796/client.crt: no such file or directory
E0821 11:52:08.473467 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/old-k8s-version-760796/client.crt: no such file or directory
E0821 11:52:08.794062 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/old-k8s-version-760796/client.crt: no such file or directory
E0821 11:52:09.434618 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/old-k8s-version-760796/client.crt: no such file or directory
E0821 11:52:10.715701 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/old-k8s-version-760796/client.crt: no such file or directory
E0821 11:52:13.276330 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/old-k8s-version-760796/client.crt: no such file or directory
E0821 11:52:18.396873 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/old-k8s-version-760796/client.crt: no such file or directory
E0821 11:52:28.637061 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/old-k8s-version-760796/client.crt: no such file or directory
E0821 11:52:39.857724 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/functional-723696/client.crt: no such file or directory
E0821 11:52:49.117978 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/old-k8s-version-760796/client.crt: no such file or directory
E0821 11:53:30.078620 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/old-k8s-version-760796/client.crt: no such file or directory
E0821 11:53:34.889576 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/ingress-addon-legacy-354854/client.crt: no such file or directory
E0821 11:54:51.999709 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/old-k8s-version-760796/client.crt: no such file or directory
E0821 11:55:27.831239 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/addons-664125/client.crt: no such file or directory
E0821 11:55:31.841796 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/ingress-addon-legacy-354854/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-848104 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.4: (5m53.568954432s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-848104 -n embed-certs-848104
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (354.12s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (14.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-whgjr" [bc70f312-ebfe-4326-8ea0-7967c1502cb7] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-whgjr" [bc70f312-ebfe-4326-8ea0-7967c1502cb7] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.033578972s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (14.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.13s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-whgjr" [bc70f312-ebfe-4326-8ea0-7967c1502cb7] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010871665s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-180246 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.13s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.43s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p no-preload-180246 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.43s)

TestStartStop/group/no-preload/serial/Pause (4.71s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-180246 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p no-preload-180246 --alsologtostderr -v=1: (1.31199364s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-180246 -n no-preload-180246
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-180246 -n no-preload-180246: exit status 2 (472.077507ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-180246 -n no-preload-180246
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-180246 -n no-preload-180246: exit status 2 (462.727962ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-180246 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p no-preload-180246 --alsologtostderr -v=1: (1.044026042s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-180246 -n no-preload-180246
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-180246 -n no-preload-180246
--- PASS: TestStartStop/group/no-preload/serial/Pause (4.71s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (83.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-660717 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-660717 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.4: (1m23.200699898s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (83.20s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (16.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-4t69t" [a33843ad-d121-4ffa-b5c4-3ec139ee01b0] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-4t69t" [a33843ad-d121-4ffa-b5c4-3ec139ee01b0] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 16.032724571s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (16.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-4t69t" [a33843ad-d121-4ffa-b5c4-3ec139ee01b0] Running
E0821 11:57:08.157037 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/old-k8s-version-760796/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.015193399s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-848104 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.35s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p embed-certs-848104 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.35s)

TestStartStop/group/embed-certs/serial/Pause (3.43s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-848104 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-848104 -n embed-certs-848104
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-848104 -n embed-certs-848104: exit status 2 (356.739101ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-848104 -n embed-certs-848104
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-848104 -n embed-certs-848104: exit status 2 (348.908199ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-848104 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-848104 -n embed-certs-848104
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-848104 -n embed-certs-848104
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.43s)

TestStartStop/group/newest-cni/serial/FirstStart (47.09s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-158212 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0-rc.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-158212 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0-rc.1: (47.092553927s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (47.09s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.62s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-660717 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c69d107b-9069-466b-8a72-32c63e38a314] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c69d107b-9069-466b-8a72-32c63e38a314] Running
E0821 11:57:35.840318 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/old-k8s-version-760796/client.crt: no such file or directory
E0821 11:57:39.857699 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/functional-723696/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.063097178s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-660717 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.62s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-660717 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-660717 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.897111452s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-660717 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.12s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.43s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-660717 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-660717 --alsologtostderr -v=3: (12.43458452s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.43s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-660717 -n default-k8s-diff-port-660717
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-660717 -n default-k8s-diff-port-660717: exit status 7 (81.924226ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-660717 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (353.96s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-660717 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.4
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-660717 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.27.4: (5m53.18072328s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-660717 -n default-k8s-diff-port-660717
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (353.96s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.08s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-158212 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-158212 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.077053301s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.08s)

TestStartStop/group/newest-cni/serial/Stop (1.26s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-158212 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-158212 --alsologtostderr -v=3: (1.261606942s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.26s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-158212 -n newest-cni-158212
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-158212 -n newest-cni-158212: exit status 7 (68.621206ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-158212 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/newest-cni/serial/SecondStart (30.73s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-158212 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0-rc.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-158212 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0-rc.1: (30.280464865s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-158212 -n newest-cni-158212
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (30.73s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.39s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p newest-cni-158212 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.39s)

TestStartStop/group/newest-cni/serial/Pause (3.09s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-158212 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-158212 -n newest-cni-158212
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-158212 -n newest-cni-158212: exit status 2 (348.357821ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-158212 -n newest-cni-158212
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-158212 -n newest-cni-158212: exit status 2 (350.449777ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-158212 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-158212 -n newest-cni-158212
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-158212 -n newest-cni-158212
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.09s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (51.68s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-473827 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E0821 11:59:27.081262 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/no-preload-180246/client.crt: no such file or directory
E0821 11:59:27.086565 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/no-preload-180246/client.crt: no such file or directory
E0821 11:59:27.096790 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/no-preload-180246/client.crt: no such file or directory
E0821 11:59:27.117141 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/no-preload-180246/client.crt: no such file or directory
E0821 11:59:27.157409 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/no-preload-180246/client.crt: no such file or directory
E0821 11:59:27.237622 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/no-preload-180246/client.crt: no such file or directory
E0821 11:59:27.398428 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/no-preload-180246/client.crt: no such file or directory
E0821 11:59:27.719357 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/no-preload-180246/client.crt: no such file or directory
E0821 11:59:28.360018 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/no-preload-180246/client.crt: no such file or directory
E0821 11:59:29.640229 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/no-preload-180246/client.crt: no such file or directory
E0821 11:59:32.200432 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/no-preload-180246/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-473827 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (51.677834842s)
--- PASS: TestNetworkPlugins/group/auto/Start (51.68s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.32s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-473827 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.4s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-473827 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-fqd7f" [c30f7232-4c2d-4df1-9327-0436fcdb2deb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0821 11:59:37.320736 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/no-preload-180246/client.crt: no such file or directory
helpers_test.go:344: "netcat-7458db8b8-fqd7f" [c30f7232-4c2d-4df1-9327-0436fcdb2deb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.014970523s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.40s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.22s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-473827 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.2s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-473827 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.19s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-473827 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (79.94s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-473827 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E0821 12:00:10.882503 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/addons-664125/client.crt: no such file or directory
E0821 12:00:27.830851 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/addons-664125/client.crt: no such file or directory
E0821 12:00:31.842576 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/ingress-addon-legacy-354854/client.crt: no such file or directory
E0821 12:00:49.001997 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/no-preload-180246/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-473827 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m19.93684757s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (79.94s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (5.04s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-5bkb2" [96867b74-2f76-4f20-88e6-1041b1f871a5] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.038825424s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.04s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.3s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-473827 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.38s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-473827 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-c89kq" [61c64e6e-af89-41f1-8ca1-1a9e076a4627] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-c89kq" [61c64e6e-af89-41f1-8ca1-1a9e076a4627] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.019439852s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.38s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.22s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-473827 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.18s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-473827 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.19s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-473827 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (68.9s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-473827 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E0821 12:02:08.157287 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/old-k8s-version-760796/client.crt: no such file or directory
E0821 12:02:10.923066 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/no-preload-180246/client.crt: no such file or directory
E0821 12:02:39.857642 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/functional-723696/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-473827 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m8.897667878s)
--- PASS: TestNetworkPlugins/group/calico/Start (68.90s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (5.05s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-zrqzt" [213e100d-70fe-4417-a388-7aaf7cbd77a5] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.047830519s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.05s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.47s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-473827 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.47s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (13.68s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-473827 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-jffqh" [28fedb97-c50b-475b-bdda-6eda9c1e55ce] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-jffqh" [28fedb97-c50b-475b-bdda-6eda9c1e55ce] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.020712177s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.68s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.27s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-473827 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.27s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.19s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-473827 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.22s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-473827 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.22s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (12.08s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-df5lj" [591c2b5a-f8ce-4c97-9b19-f6ba5cb5d5b2] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-df5lj" [591c2b5a-f8ce-4c97-9b19-f6ba5cb5d5b2] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.073550815s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (12.08s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.13s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-df5lj" [591c2b5a-f8ce-4c97-9b19-f6ba5cb5d5b2] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.014836392s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-660717 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.13s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (76.47s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-473827 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-473827 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m16.473879247s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (76.47s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.48s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p default-k8s-diff-port-660717 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.48s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (4.27s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-660717 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-660717 --alsologtostderr -v=1: (1.132426153s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-660717 -n default-k8s-diff-port-660717
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-660717 -n default-k8s-diff-port-660717: exit status 2 (440.981531ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-660717 -n default-k8s-diff-port-660717
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-660717 -n default-k8s-diff-port-660717: exit status 2 (563.54121ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-660717 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-660717 -n default-k8s-diff-port-660717
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-660717 -n default-k8s-diff-port-660717
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.27s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (91.89s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-473827 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E0821 12:04:27.081131 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/no-preload-180246/client.crt: no such file or directory
E0821 12:04:36.366577 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/auto-473827/client.crt: no such file or directory
E0821 12:04:36.371777 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/auto-473827/client.crt: no such file or directory
E0821 12:04:36.381997 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/auto-473827/client.crt: no such file or directory
E0821 12:04:36.402209 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/auto-473827/client.crt: no such file or directory
E0821 12:04:36.442437 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/auto-473827/client.crt: no such file or directory
E0821 12:04:36.522688 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/auto-473827/client.crt: no such file or directory
E0821 12:04:36.683001 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/auto-473827/client.crt: no such file or directory
E0821 12:04:37.003515 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/auto-473827/client.crt: no such file or directory
E0821 12:04:37.644550 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/auto-473827/client.crt: no such file or directory
E0821 12:04:38.925463 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/auto-473827/client.crt: no such file or directory
E0821 12:04:41.485655 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/auto-473827/client.crt: no such file or directory
E0821 12:04:46.606602 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/auto-473827/client.crt: no such file or directory
E0821 12:04:54.763744 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/no-preload-180246/client.crt: no such file or directory
E0821 12:04:56.847378 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/auto-473827/client.crt: no such file or directory
E0821 12:05:17.328197 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/auto-473827/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-473827 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m31.888998001s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (91.89s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-473827 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.44s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-473827 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-7lrkh" [d4692a51-75b2-4ffa-ab5c-66654aa053e2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-7lrkh" [d4692a51-75b2-4ffa-ab5c-66654aa053e2] Running
E0821 12:05:27.831249 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/addons-664125/client.crt: no such file or directory
E0821 12:05:31.842463 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/ingress-addon-legacy-354854/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.011092597s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.44s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.22s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-473827 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-473827 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.2s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-473827 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.20s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.42s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-473827 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.42s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.61s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-473827 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-nw27g" [f9313571-d52f-4c18-8074-842eb0f5301f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-nw27g" [f9313571-d52f-4c18-8074-842eb0f5301f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.01563346s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.61s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (71.71s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-473827 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E0821 12:05:58.289223 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/auto-473827/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-473827 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m11.706417547s)
--- PASS: TestNetworkPlugins/group/flannel/Start (71.71s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-473827 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-473827 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-473827 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (52.7s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-473827 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E0821 12:06:28.480521 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/kindnet-473827/client.crt: no such file or directory
E0821 12:06:28.485989 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/kindnet-473827/client.crt: no such file or directory
E0821 12:06:28.496217 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/kindnet-473827/client.crt: no such file or directory
E0821 12:06:28.516452 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/kindnet-473827/client.crt: no such file or directory
E0821 12:06:28.556692 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/kindnet-473827/client.crt: no such file or directory
E0821 12:06:28.636959 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/kindnet-473827/client.crt: no such file or directory
E0821 12:06:28.797327 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/kindnet-473827/client.crt: no such file or directory
E0821 12:06:29.117939 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/kindnet-473827/client.crt: no such file or directory
E0821 12:06:29.758326 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/kindnet-473827/client.crt: no such file or directory
E0821 12:06:31.038972 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/kindnet-473827/client.crt: no such file or directory
E0821 12:06:33.600043 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/kindnet-473827/client.crt: no such file or directory
E0821 12:06:38.720935 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/kindnet-473827/client.crt: no such file or directory
E0821 12:06:48.961142 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/kindnet-473827/client.crt: no such file or directory
E0821 12:07:08.157010 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/old-k8s-version-760796/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-473827 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (52.703029362s)
--- PASS: TestNetworkPlugins/group/bridge/Start (52.70s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (5.05s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-5dc9v" [e3d05212-70f4-47e3-8546-b5350d634e97] Running
E0821 12:07:09.441314 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/kindnet-473827/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.045451055s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.05s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-473827 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.42s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-473827 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-mnlqs" [4d1dd3de-635d-482a-87e3-9ce53c1ab7c5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-mnlqs" [4d1dd3de-635d-482a-87e3-9ce53c1ab7c5] Running
E0821 12:07:20.210337 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/auto-473827/client.crt: no such file or directory
E0821 12:07:22.904178 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/functional-723696/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.029470866s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.43s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-473827 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (10.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-473827 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-f4dpv" [14215ba7-d117-4032-876c-17bc06f7d7d3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-f4dpv" [14215ba7-d117-4032-876c-17bc06f7d7d3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.010126386s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-473827 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-473827 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-473827 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-473827 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-473827 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-473827 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.19s)

                                                
                                    

Test skip (32/310)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.27.4/cached-images (0s)

=== RUN   TestDownloadOnly/v1.27.4/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.27.4/cached-images (0.00s)

TestDownloadOnly/v1.27.4/binaries (0s)

=== RUN   TestDownloadOnly/v1.27.4/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.27.4/binaries (0.00s)

TestDownloadOnly/v1.27.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.27.4/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.27.4/kubectl (0.00s)

TestDownloadOnly/v1.28.0-rc.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0-rc.1/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0-rc.1/cached-images (0.00s)

TestDownloadOnly/v1.28.0-rc.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0-rc.1/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0-rc.1/binaries (0.00s)

TestDownloadOnly/v1.28.0-rc.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0-rc.1/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0-rc.1/kubectl (0.00s)

TestDownloadOnlyKic (0.61s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-910339 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:234: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-910339" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-910339
--- SKIP: TestDownloadOnlyKic (0.61s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:420: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:474: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-972900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-972900
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.87s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
E0821 11:43:30.881972 2739930 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/addons-664125/client.crt: no such file or directory
panic.go:522: 
----------------------- debugLogs start: kubenet-473827 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-473827

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-473827

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-473827

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-473827

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-473827

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-473827

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-473827

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-473827

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-473827

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-473827

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-473827"

>>> host: /etc/hosts:
* Profile "kubenet-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-473827"

>>> host: /etc/resolv.conf:
* Profile "kubenet-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-473827"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-473827

>>> host: crictl pods:
* Profile "kubenet-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-473827"

>>> host: crictl containers:
* Profile "kubenet-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-473827"

>>> k8s: describe netcat deployment:
error: context "kubenet-473827" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-473827" does not exist

>>> k8s: netcat logs:
error: context "kubenet-473827" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-473827" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-473827" does not exist

>>> k8s: coredns logs:
error: context "kubenet-473827" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-473827" does not exist

>>> k8s: api server logs:
error: context "kubenet-473827" does not exist

>>> host: /etc/cni:
* Profile "kubenet-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-473827"

>>> host: ip a s:
* Profile "kubenet-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-473827"

>>> host: ip r s:
* Profile "kubenet-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-473827"

>>> host: iptables-save:
* Profile "kubenet-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-473827"

>>> host: iptables table nat:
* Profile "kubenet-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-473827"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-473827" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-473827" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-473827" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-473827"

>>> host: kubelet daemon config:
* Profile "kubenet-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-473827"

>>> k8s: kubelet logs:
* Profile "kubenet-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-473827"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-473827"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-473827"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17102-2734539/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 21 Aug 2023 11:43:28 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.67.2:8443
  name: pause-736825
contexts:
- context:
    cluster: pause-736825
    extensions:
    - extension:
        last-update: Mon, 21 Aug 2023 11:43:28 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: context_info
    namespace: default
    user: pause-736825
  name: pause-736825
current-context: pause-736825
kind: Config
preferences: {}
users:
- name: pause-736825
  user:
    client-certificate: /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/pause-736825/client.crt
    client-key: /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/pause-736825/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-473827

>>> host: docker daemon status:
* Profile "kubenet-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-473827"

>>> host: docker daemon config:
* Profile "kubenet-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-473827"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-473827"

>>> host: docker system info:
* Profile "kubenet-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-473827"

>>> host: cri-docker daemon status:
* Profile "kubenet-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-473827"

>>> host: cri-docker daemon config:
* Profile "kubenet-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-473827"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-473827"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-473827"

>>> host: cri-dockerd version:
* Profile "kubenet-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-473827"

>>> host: containerd daemon status:
* Profile "kubenet-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-473827"

>>> host: containerd daemon config:
* Profile "kubenet-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-473827"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-473827"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-473827"

>>> host: containerd config dump:
* Profile "kubenet-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-473827"

>>> host: crio daemon status:
* Profile "kubenet-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-473827"

>>> host: crio daemon config:
* Profile "kubenet-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-473827"

>>> host: /etc/crio:
* Profile "kubenet-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-473827"

>>> host: crio config:
* Profile "kubenet-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-473827"

----------------------- debugLogs end: kubenet-473827 [took: 3.676134953s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-473827" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-473827
--- SKIP: TestNetworkPlugins/group/kubenet (3.87s)

x
+
TestNetworkPlugins/group/cilium (3.89s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-473827 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-473827

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-473827

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-473827

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-473827

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-473827

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-473827

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-473827

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-473827

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-473827

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-473827

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-473827"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-473827"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-473827"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-473827

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-473827"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-473827"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-473827" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-473827" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-473827" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-473827" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-473827" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-473827" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-473827" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-473827" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-473827"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-473827"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-473827"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-473827"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-473827"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-473827

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-473827

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-473827" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-473827" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-473827

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-473827

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-473827" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-473827" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-473827" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-473827" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-473827" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-473827"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-473827"

>>> k8s: kubelet logs:
* Profile "cilium-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-473827"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-473827"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-473827"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17102-2734539/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 21 Aug 2023 11:43:28 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.67.2:8443
  name: pause-736825
contexts:
- context:
    cluster: pause-736825
    extensions:
    - extension:
        last-update: Mon, 21 Aug 2023 11:43:28 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: context_info
    namespace: default
    user: pause-736825
  name: pause-736825
current-context: pause-736825
kind: Config
preferences: {}
users:
- name: pause-736825
  user:
    client-certificate: /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/pause-736825/client.crt
    client-key: /home/jenkins/minikube-integration/17102-2734539/.minikube/profiles/pause-736825/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-473827

>>> host: docker daemon status:
* Profile "cilium-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-473827"

>>> host: docker daemon config:
* Profile "cilium-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-473827"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-473827"

>>> host: docker system info:
* Profile "cilium-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-473827"

>>> host: cri-docker daemon status:
* Profile "cilium-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-473827"

>>> host: cri-docker daemon config:
* Profile "cilium-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-473827"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-473827"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-473827"

>>> host: cri-dockerd version:
* Profile "cilium-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-473827"

>>> host: containerd daemon status:
* Profile "cilium-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-473827"

>>> host: containerd daemon config:
* Profile "cilium-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-473827"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-473827"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-473827"

>>> host: containerd config dump:
* Profile "cilium-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-473827"

>>> host: crio daemon status:
* Profile "cilium-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-473827"

>>> host: crio daemon config:
* Profile "cilium-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-473827"

>>> host: /etc/crio:
* Profile "cilium-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-473827"

>>> host: crio config:
* Profile "cilium-473827" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-473827"

----------------------- debugLogs end: cilium-473827 [took: 3.724515079s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-473827" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-473827
--- SKIP: TestNetworkPlugins/group/cilium (3.89s)
