Test Report: Docker_Linux_crio_arm64 17375

48ead6827c858d28720e0f0a5b94c9bf64850269:2023-10-10:31379

Tests failed (7/308)

| Order | Failed test                                         | Duration (s) |
|-------|-----------------------------------------------------|--------------|
| 28    | TestAddons/parallel/Ingress                         | 169.4        |
| 159   | TestIngressAddonLegacy/serial/ValidateIngressAddons | 177.35       |
| 209   | TestMultiNode/serial/PingHostFrom2Pods              | 4.17         |
| 230   | TestRunningBinaryUpgrade                            | 78.79        |
| 233   | TestMissingContainerUpgrade                         | 180.04       |
| 245   | TestStoppedBinaryUpgrade/Upgrade                    | 409.49       |
| 255   | TestStoppedBinaryUpgrade/MinikubeLogs               | 0.16         |
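Any entry in this table can be re-run locally with a standard Go test filter; a minimal sketch, assuming a minikube source checkout with the out/minikube-linux-arm64 binary already built (the --minikube-start-args flag follows minikube's contributor docs and is an assumption here):

	# re-run only the failing Ingress subtest against the docker driver + crio runtime
	go test ./test/integration -v -timeout 60m \
	  -run 'TestAddons/parallel/Ingress' \
	  -args --minikube-start-args='--driver=docker --container-runtime=crio'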
TestAddons/parallel/Ingress (169.4s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:206: (dbg) Run:  kubectl --context addons-749116 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:231: (dbg) Run:  kubectl --context addons-749116 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context addons-749116 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [880b7678-4b27-478e-bde2-95f2d1a3fa52] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [880b7678-4b27-478e-bde2-95f2d1a3fa52] Running
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.016061403s
addons_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p addons-749116 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-749116 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.091847248s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:277: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
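Exit status 28 above is curl's own operation-timeout code, surfaced through ssh's exit status: the in-node request to port 80 timed out rather than being refused. A hand-run version of the same probe, sketched with names from this run (the deployment name ingress-nginx-controller is the ingress addon's usual default and is assumed here):

	# repeat the in-node probe with verbose output and an explicit timeout
	out/minikube-linux-arm64 -p addons-749116 ssh -- curl -sv --max-time 30 -H 'Host: nginx.example.com' http://127.0.0.1/
	# confirm the controller is running and inspect its recent logs
	kubectl --context addons-749116 -n ingress-nginx get pods -o wide
	kubectl --context addons-749116 -n ingress-nginx logs deploy/ingress-nginx-controller --tail=50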
addons_test.go:285: (dbg) Run:  kubectl --context addons-749116 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p addons-749116 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:296: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.050978874s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:298: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:302: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
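The timeout means nothing answered DNS queries at the node IP 192.168.49.2, so the ingress-dns responder itself is the first thing to check. A sketch using values from this run (the pod name kube-ingress-dns-minikube is the addon's usual static name, an assumption here):

	# retry the lookup against the node IP reported by `minikube ip`
	nslookup hello-john.test 192.168.49.2
	# find the responder pod and read its logs
	kubectl --context addons-749116 -n kube-system get pods | grep -i ingress-dns
	kubectl --context addons-749116 -n kube-system logs kube-ingress-dns-minikube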
addons_test.go:305: (dbg) Run:  out/minikube-linux-arm64 -p addons-749116 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-arm64 -p addons-749116 addons disable ingress-dns --alsologtostderr -v=1: (1.187636843s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-arm64 -p addons-749116 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-arm64 -p addons-749116 addons disable ingress --alsologtostderr -v=1: (7.870233537s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-749116
helpers_test.go:235: (dbg) docker inspect addons-749116:

-- stdout --
	[
	    {
	        "Id": "39be1bc99cb498907ce9c54e2fd8ff82fabe50cacf47f2a99774da4327c487c8",
	        "Created": "2023-10-09T22:55:31.554482208Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1544190,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-09T22:55:31.892792467Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:7c31788aee97084e64d3a410721295a10fc01c1f34b468c1bc9be09686708026",
	        "ResolvConfPath": "/var/lib/docker/containers/39be1bc99cb498907ce9c54e2fd8ff82fabe50cacf47f2a99774da4327c487c8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/39be1bc99cb498907ce9c54e2fd8ff82fabe50cacf47f2a99774da4327c487c8/hostname",
	        "HostsPath": "/var/lib/docker/containers/39be1bc99cb498907ce9c54e2fd8ff82fabe50cacf47f2a99774da4327c487c8/hosts",
	        "LogPath": "/var/lib/docker/containers/39be1bc99cb498907ce9c54e2fd8ff82fabe50cacf47f2a99774da4327c487c8/39be1bc99cb498907ce9c54e2fd8ff82fabe50cacf47f2a99774da4327c487c8-json.log",
	        "Name": "/addons-749116",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-749116:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-749116",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/857148e5027e74b5bfd9d3e61f7d1f7c7461eb295be6bbd42b36fd1b797da66d-init/diff:/var/lib/docker/overlay2/ef9093ba51e6eb88ff4b48fff9bf153334448175aa68f58581a9571eed9ca4f9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/857148e5027e74b5bfd9d3e61f7d1f7c7461eb295be6bbd42b36fd1b797da66d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/857148e5027e74b5bfd9d3e61f7d1f7c7461eb295be6bbd42b36fd1b797da66d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/857148e5027e74b5bfd9d3e61f7d1f7c7461eb295be6bbd42b36fd1b797da66d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-749116",
	                "Source": "/var/lib/docker/volumes/addons-749116/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-749116",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-749116",
	                "name.minikube.sigs.k8s.io": "addons-749116",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "37e783a490c7dcae68d63b1f7fe7e4e9e08fd6710da2224bf9aa769b24b82902",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34359"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34358"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34355"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34357"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34356"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/37e783a490c7",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-749116": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "39be1bc99cb4",
	                        "addons-749116"
	                    ],
	                    "NetworkID": "1cab4678cb4d46e723b31e4db65fa4cbe3a36789b5f48dba5dcdd0bf8239be49",
	                    "EndpointID": "aa6f2c41f16b9befb2b7076ab9b7bb9cba6a3130f64aa7b7883762aeeaf688d2",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
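When only a few of these fields matter, docker inspect's Go templates pull them directly instead of dumping the whole document; the port template below is the same one the test harness itself runs later in this log:

	# container state at a glance
	docker inspect -f '{{.State.Status}} restarts={{.RestartCount}}' addons-749116
	# the host port mapped to the container's ssh endpoint (22/tcp)
	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-749116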
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-749116 -n addons-749116
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-749116 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-749116 logs -n 25: (1.576124602s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube               | jenkins | v1.31.2 | 09 Oct 23 22:55 UTC | 09 Oct 23 22:55 UTC |
	| delete  | -p download-only-132234                                                                     | download-only-132234   | jenkins | v1.31.2 | 09 Oct 23 22:55 UTC | 09 Oct 23 22:55 UTC |
	| delete  | -p download-only-132234                                                                     | download-only-132234   | jenkins | v1.31.2 | 09 Oct 23 22:55 UTC | 09 Oct 23 22:55 UTC |
	| start   | --download-only -p                                                                          | download-docker-066198 | jenkins | v1.31.2 | 09 Oct 23 22:55 UTC |                     |
	|         | download-docker-066198                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-066198                                                                   | download-docker-066198 | jenkins | v1.31.2 | 09 Oct 23 22:55 UTC | 09 Oct 23 22:55 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-063694   | jenkins | v1.31.2 | 09 Oct 23 22:55 UTC |                     |
	|         | binary-mirror-063694                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:45427                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-063694                                                                     | binary-mirror-063694   | jenkins | v1.31.2 | 09 Oct 23 22:55 UTC | 09 Oct 23 22:55 UTC |
	| addons  | disable dashboard -p                                                                        | addons-749116          | jenkins | v1.31.2 | 09 Oct 23 22:55 UTC |                     |
	|         | addons-749116                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-749116          | jenkins | v1.31.2 | 09 Oct 23 22:55 UTC |                     |
	|         | addons-749116                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-749116 --wait=true                                                                | addons-749116          | jenkins | v1.31.2 | 09 Oct 23 22:55 UTC | 09 Oct 23 22:57 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| ip      | addons-749116 ip                                                                            | addons-749116          | jenkins | v1.31.2 | 09 Oct 23 22:58 UTC | 09 Oct 23 22:58 UTC |
	| addons  | addons-749116 addons disable                                                                | addons-749116          | jenkins | v1.31.2 | 09 Oct 23 22:58 UTC | 09 Oct 23 22:58 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-749116          | jenkins | v1.31.2 | 09 Oct 23 22:58 UTC | 09 Oct 23 22:58 UTC |
	|         | -p addons-749116                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-749116 ssh cat                                                                       | addons-749116          | jenkins | v1.31.2 | 09 Oct 23 22:58 UTC | 09 Oct 23 22:58 UTC |
	|         | /opt/local-path-provisioner/pvc-590f73da-98a3-4a3a-b26d-19a0557f0a9c_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-749116 addons disable                                                                | addons-749116          | jenkins | v1.31.2 | 09 Oct 23 22:58 UTC | 09 Oct 23 22:59 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-749116 addons                                                                        | addons-749116          | jenkins | v1.31.2 | 09 Oct 23 22:58 UTC | 09 Oct 23 22:58 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-749116 addons                                                                        | addons-749116          | jenkins | v1.31.2 | 09 Oct 23 22:58 UTC | 09 Oct 23 22:58 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-749116          | jenkins | v1.31.2 | 09 Oct 23 22:58 UTC | 09 Oct 23 22:58 UTC |
	|         | addons-749116                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-749116          | jenkins | v1.31.2 | 09 Oct 23 22:58 UTC | 09 Oct 23 22:58 UTC |
	|         | -p addons-749116                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-749116 addons                                                                        | addons-749116          | jenkins | v1.31.2 | 09 Oct 23 22:59 UTC | 09 Oct 23 22:59 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-749116          | jenkins | v1.31.2 | 09 Oct 23 22:59 UTC | 09 Oct 23 22:59 UTC |
	|         | addons-749116                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-749116 ssh curl -s                                                                   | addons-749116          | jenkins | v1.31.2 | 09 Oct 23 22:59 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-749116 ip                                                                            | addons-749116          | jenkins | v1.31.2 | 09 Oct 23 23:01 UTC | 09 Oct 23 23:01 UTC |
	| addons  | addons-749116 addons disable                                                                | addons-749116          | jenkins | v1.31.2 | 09 Oct 23 23:01 UTC | 09 Oct 23 23:01 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-749116 addons disable                                                                | addons-749116          | jenkins | v1.31.2 | 09 Oct 23 23:01 UTC | 09 Oct 23 23:01 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/09 22:55:08
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.21.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 22:55:08.457230 1543722 out.go:296] Setting OutFile to fd 1 ...
	I1009 22:55:08.458774 1543722 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1009 22:55:08.458790 1543722 out.go:309] Setting ErrFile to fd 2...
	I1009 22:55:08.458797 1543722 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1009 22:55:08.459133 1543722 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17375-1537865/.minikube/bin
	I1009 22:55:08.459602 1543722 out.go:303] Setting JSON to false
	I1009 22:55:08.460417 1543722 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":23852,"bootTime":1696868257,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1009 22:55:08.460493 1543722 start.go:138] virtualization:  
	I1009 22:55:08.463354 1543722 out.go:177] * [addons-749116] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1009 22:55:08.466150 1543722 out.go:177]   - MINIKUBE_LOCATION=17375
	I1009 22:55:08.468189 1543722 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 22:55:08.466322 1543722 notify.go:220] Checking for updates...
	I1009 22:55:08.471828 1543722 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17375-1537865/kubeconfig
	I1009 22:55:08.473901 1543722 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17375-1537865/.minikube
	I1009 22:55:08.475767 1543722 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1009 22:55:08.477663 1543722 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 22:55:08.479832 1543722 driver.go:378] Setting default libvirt URI to qemu:///system
	I1009 22:55:08.507233 1543722 docker.go:122] docker version: linux-24.0.6:Docker Engine - Community
	I1009 22:55:08.507331 1543722 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 22:55:08.593138 1543722 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-10-09 22:55:08.583314283 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1009 22:55:08.593256 1543722 docker.go:295] overlay module found
	I1009 22:55:08.596812 1543722 out.go:177] * Using the docker driver based on user configuration
	I1009 22:55:08.598797 1543722 start.go:298] selected driver: docker
	I1009 22:55:08.598815 1543722 start.go:902] validating driver "docker" against <nil>
	I1009 22:55:08.598829 1543722 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 22:55:08.599541 1543722 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 22:55:08.664093 1543722 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-10-09 22:55:08.65363017 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1009 22:55:08.664258 1543722 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1009 22:55:08.664484 1543722 start_flags.go:926] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 22:55:08.666860 1543722 out.go:177] * Using Docker driver with root privileges
	I1009 22:55:08.668912 1543722 cni.go:84] Creating CNI manager for ""
	I1009 22:55:08.668934 1543722 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 22:55:08.668945 1543722 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 22:55:08.668963 1543722 start_flags.go:323] config:
	{Name:addons-749116 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-749116 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1009 22:55:08.671579 1543722 out.go:177] * Starting control plane node addons-749116 in cluster addons-749116
	I1009 22:55:08.673602 1543722 cache.go:122] Beginning downloading kic base image for docker with crio
	I1009 22:55:08.675785 1543722 out.go:177] * Pulling base image ...
	I1009 22:55:08.677618 1543722 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1009 22:55:08.677695 1543722 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17375-1537865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4
	I1009 22:55:08.677706 1543722 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon
	I1009 22:55:08.677710 1543722 cache.go:57] Caching tarball of preloaded images
	I1009 22:55:08.677914 1543722 preload.go:174] Found /home/jenkins/minikube-integration/17375-1537865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1009 22:55:08.677925 1543722 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I1009 22:55:08.678315 1543722 profile.go:148] Saving config to /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/addons-749116/config.json ...
	I1009 22:55:08.678337 1543722 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/addons-749116/config.json: {Name:mkcd071ffa1fdc65bd21f6cf6b01c81f8a418f5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 22:55:08.694433 1543722 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae to local cache
	I1009 22:55:08.694550 1543722 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local cache directory
	I1009 22:55:08.694574 1543722 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local cache directory, skipping pull
	I1009 22:55:08.694579 1543722 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae exists in cache, skipping pull
	I1009 22:55:08.694591 1543722 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae as a tarball
	I1009 22:55:08.694597 1543722 cache.go:163] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae from local cache
	I1009 22:55:24.615433 1543722 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae from cached tarball
	I1009 22:55:24.615472 1543722 cache.go:195] Successfully downloaded all kic artifacts
	I1009 22:55:24.615527 1543722 start.go:365] acquiring machines lock for addons-749116: {Name:mk3c2a13bef0f3605dcd636a33cde9b423f79964 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 22:55:24.616018 1543722 start.go:369] acquired machines lock for "addons-749116" in 465.979µs
	I1009 22:55:24.616054 1543722 start.go:93] Provisioning new machine with config: &{Name:addons-749116 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-749116 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 22:55:24.616145 1543722 start.go:125] createHost starting for "" (driver="docker")
	I1009 22:55:24.619113 1543722 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1009 22:55:24.619411 1543722 start.go:159] libmachine.API.Create for "addons-749116" (driver="docker")
	I1009 22:55:24.619445 1543722 client.go:168] LocalClient.Create starting
	I1009 22:55:24.619584 1543722 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/ca.pem
	I1009 22:55:24.801298 1543722 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/cert.pem
	I1009 22:55:25.196270 1543722 cli_runner.go:164] Run: docker network inspect addons-749116 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1009 22:55:25.221558 1543722 cli_runner.go:211] docker network inspect addons-749116 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1009 22:55:25.221644 1543722 network_create.go:281] running [docker network inspect addons-749116] to gather additional debugging logs...
	I1009 22:55:25.221661 1543722 cli_runner.go:164] Run: docker network inspect addons-749116
	W1009 22:55:25.239758 1543722 cli_runner.go:211] docker network inspect addons-749116 returned with exit code 1
	I1009 22:55:25.239789 1543722 network_create.go:284] error running [docker network inspect addons-749116]: docker network inspect addons-749116: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-749116 not found
	I1009 22:55:25.239802 1543722 network_create.go:286] output of [docker network inspect addons-749116]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-749116 not found
	
	** /stderr **
	I1009 22:55:25.239902 1543722 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 22:55:25.257804 1543722 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40028ce400}
	I1009 22:55:25.257848 1543722 network_create.go:124] attempt to create docker network addons-749116 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1009 22:55:25.257905 1543722 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-749116 addons-749116
	I1009 22:55:25.338140 1543722 network_create.go:108] docker network addons-749116 192.168.49.0/24 created
	I1009 22:55:25.338165 1543722 kic.go:118] calculated static IP "192.168.49.2" for the "addons-749116" container
	I1009 22:55:25.338254 1543722 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1009 22:55:25.355616 1543722 cli_runner.go:164] Run: docker volume create addons-749116 --label name.minikube.sigs.k8s.io=addons-749116 --label created_by.minikube.sigs.k8s.io=true
	I1009 22:55:25.374546 1543722 oci.go:103] Successfully created a docker volume addons-749116
	I1009 22:55:25.374643 1543722 cli_runner.go:164] Run: docker run --rm --name addons-749116-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-749116 --entrypoint /usr/bin/test -v addons-749116:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -d /var/lib
	I1009 22:55:27.281868 1543722 cli_runner.go:217] Completed: docker run --rm --name addons-749116-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-749116 --entrypoint /usr/bin/test -v addons-749116:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -d /var/lib: (1.907176353s)
	I1009 22:55:27.281908 1543722 oci.go:107] Successfully prepared a docker volume addons-749116
	I1009 22:55:27.281939 1543722 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1009 22:55:27.281957 1543722 kic.go:191] Starting extracting preloaded images to volume ...
	I1009 22:55:27.282030 1543722 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17375-1537865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-749116:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -I lz4 -xf /preloaded.tar -C /extractDir
	I1009 22:55:31.467544 1543722 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17375-1537865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-749116:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -I lz4 -xf /preloaded.tar -C /extractDir: (4.185470389s)
	I1009 22:55:31.467576 1543722 kic.go:200] duration metric: took 4.185614 seconds to extract preloaded images to volume
	W1009 22:55:31.467745 1543722 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1009 22:55:31.467858 1543722 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1009 22:55:31.536401 1543722 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-749116 --name addons-749116 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-749116 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-749116 --network addons-749116 --ip 192.168.49.2 --volume addons-749116:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae
	I1009 22:55:31.901621 1543722 cli_runner.go:164] Run: docker container inspect addons-749116 --format={{.State.Running}}
	I1009 22:55:31.928700 1543722 cli_runner.go:164] Run: docker container inspect addons-749116 --format={{.State.Status}}
	I1009 22:55:31.953746 1543722 cli_runner.go:164] Run: docker exec addons-749116 stat /var/lib/dpkg/alternatives/iptables
	I1009 22:55:32.025293 1543722 oci.go:144] the created container "addons-749116" has a running status.
	I1009 22:55:32.025322 1543722 kic.go:222] Creating ssh key for kic: /home/jenkins/minikube-integration/17375-1537865/.minikube/machines/addons-749116/id_rsa...
	I1009 22:55:33.069372 1543722 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17375-1537865/.minikube/machines/addons-749116/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1009 22:55:33.103275 1543722 cli_runner.go:164] Run: docker container inspect addons-749116 --format={{.State.Status}}
	I1009 22:55:33.154509 1543722 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1009 22:55:33.154535 1543722 kic_runner.go:114] Args: [docker exec --privileged addons-749116 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1009 22:55:33.249675 1543722 cli_runner.go:164] Run: docker container inspect addons-749116 --format={{.State.Status}}
	I1009 22:55:33.273301 1543722 machine.go:88] provisioning docker machine ...
	I1009 22:55:33.273330 1543722 ubuntu.go:169] provisioning hostname "addons-749116"
	I1009 22:55:33.273412 1543722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-749116
	I1009 22:55:33.292267 1543722 main.go:141] libmachine: Using SSH client type: native
	I1009 22:55:33.292766 1543722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 34359 <nil> <nil>}
	I1009 22:55:33.292784 1543722 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-749116 && echo "addons-749116" | sudo tee /etc/hostname
	I1009 22:55:33.444632 1543722 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-749116
	
	I1009 22:55:33.444755 1543722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-749116
	I1009 22:55:33.465800 1543722 main.go:141] libmachine: Using SSH client type: native
	I1009 22:55:33.466208 1543722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 34359 <nil> <nil>}
	I1009 22:55:33.466232 1543722 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-749116' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-749116/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-749116' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 22:55:33.600750 1543722 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 22:55:33.600778 1543722 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17375-1537865/.minikube CaCertPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17375-1537865/.minikube}
	I1009 22:55:33.600809 1543722 ubuntu.go:177] setting up certificates
	I1009 22:55:33.600818 1543722 provision.go:83] configureAuth start
	I1009 22:55:33.600880 1543722 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-749116
	I1009 22:55:33.619277 1543722 provision.go:138] copyHostCerts
	I1009 22:55:33.619363 1543722 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17375-1537865/.minikube/ca.pem (1078 bytes)
	I1009 22:55:33.619501 1543722 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17375-1537865/.minikube/cert.pem (1123 bytes)
	I1009 22:55:33.619571 1543722 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17375-1537865/.minikube/key.pem (1679 bytes)
	I1009 22:55:33.619682 1543722 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17375-1537865/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17375-1537865/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17375-1537865/.minikube/certs/ca-key.pem org=jenkins.addons-749116 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-749116]
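	minikube generates this server certificate in Go; for readers reproducing the step by hand, an equivalent OpenSSL flow covering the same SAN list would be (a sketch; san.cnf, server.csr and the key/cert file names are illustrative, not what minikube writes):
	  openssl genrsa -out server-key.pem 2048
	  printf '%s\n' '[req]' 'distinguished_name = dn' 'req_extensions = v3_req' '[dn]' '[v3_req]' \
	    'subjectAltName = IP:192.168.49.2, IP:127.0.0.1, DNS:localhost, DNS:minikube, DNS:addons-749116' > san.cnf
	  openssl req -new -key server-key.pem -subj "/O=jenkins.addons-749116" -out server.csr -config san.cnf
	  openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	    -out server.pem -days 365 -extensions v3_req -extfile san.cnf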
	I1009 22:55:33.957880 1543722 provision.go:172] copyRemoteCerts
	I1009 22:55:33.957955 1543722 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 22:55:33.958005 1543722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-749116
	I1009 22:55:33.976146 1543722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34359 SSHKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/machines/addons-749116/id_rsa Username:docker}
	I1009 22:55:34.078278 1543722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-1537865/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1009 22:55:34.107284 1543722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-1537865/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 22:55:34.139718 1543722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 22:55:34.169221 1543722 provision.go:86] duration metric: configureAuth took 568.366469ms
	I1009 22:55:34.169293 1543722 ubuntu.go:193] setting minikube options for container-runtime
	I1009 22:55:34.169502 1543722 config.go:182] Loaded profile config "addons-749116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1009 22:55:34.169612 1543722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-749116
	I1009 22:55:34.187387 1543722 main.go:141] libmachine: Using SSH client type: native
	I1009 22:55:34.187831 1543722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 34359 <nil> <nil>}
	I1009 22:55:34.187855 1543722 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 22:55:34.435479 1543722 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
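	For context: /etc/sysconfig/crio.minikube is an environment file; how CRI-O consumes it is not shown in this log, but the usual systemd wiring would be a unit drop-in along these lines (hypothetical sketch, path and contents assumed, not taken from the image):
	  # e.g. /etc/systemd/system/crio.service.d/10-minikube.conf  (hypothetical path)
	  [Service]
	  EnvironmentFile=-/etc/sysconfig/crio.minikube
	  ExecStart=
	  ExecStart=/usr/bin/crio $CRIO_MINIKUBE_OPTIONS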
	
	I1009 22:55:34.435555 1543722 machine.go:91] provisioned docker machine in 1.162234326s
	I1009 22:55:34.435578 1543722 client.go:171] LocalClient.Create took 9.816123885s
	I1009 22:55:34.435606 1543722 start.go:167] duration metric: libmachine.API.Create for "addons-749116" took 9.816195712s
	I1009 22:55:34.435641 1543722 start.go:300] post-start starting for "addons-749116" (driver="docker")
	I1009 22:55:34.435668 1543722 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 22:55:34.435776 1543722 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 22:55:34.435852 1543722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-749116
	I1009 22:55:34.454351 1543722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34359 SSHKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/machines/addons-749116/id_rsa Username:docker}
	I1009 22:55:34.549939 1543722 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 22:55:34.554102 1543722 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 22:55:34.554139 1543722 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1009 22:55:34.554152 1543722 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1009 22:55:34.554159 1543722 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1009 22:55:34.554168 1543722 filesync.go:126] Scanning /home/jenkins/minikube-integration/17375-1537865/.minikube/addons for local assets ...
	I1009 22:55:34.554230 1543722 filesync.go:126] Scanning /home/jenkins/minikube-integration/17375-1537865/.minikube/files for local assets ...
	I1009 22:55:34.554266 1543722 start.go:303] post-start completed in 118.599058ms
	I1009 22:55:34.554567 1543722 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-749116
	I1009 22:55:34.572276 1543722 profile.go:148] Saving config to /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/addons-749116/config.json ...
	I1009 22:55:34.572577 1543722 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 22:55:34.572622 1543722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-749116
	I1009 22:55:34.590301 1543722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34359 SSHKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/machines/addons-749116/id_rsa Username:docker}
	I1009 22:55:34.681280 1543722 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 22:55:34.687081 1543722 start.go:128] duration metric: createHost completed in 10.070919758s
	I1009 22:55:34.687142 1543722 start.go:83] releasing machines lock for "addons-749116", held for 10.071070864s
	I1009 22:55:34.687219 1543722 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-749116
	I1009 22:55:34.704292 1543722 ssh_runner.go:195] Run: cat /version.json
	I1009 22:55:34.704344 1543722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-749116
	I1009 22:55:34.704455 1543722 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 22:55:34.704534 1543722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-749116
	I1009 22:55:34.723422 1543722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34359 SSHKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/machines/addons-749116/id_rsa Username:docker}
	I1009 22:55:34.742844 1543722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34359 SSHKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/machines/addons-749116/id_rsa Username:docker}
	I1009 22:55:34.952343 1543722 ssh_runner.go:195] Run: systemctl --version
	I1009 22:55:34.958103 1543722 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 22:55:35.123673 1543722 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1009 22:55:35.129609 1543722 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 22:55:35.158240 1543722 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1009 22:55:35.158336 1543722 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 22:55:35.206400 1543722 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1009 22:55:35.206424 1543722 start.go:472] detecting cgroup driver to use...
	I1009 22:55:35.206456 1543722 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1009 22:55:35.206505 1543722 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 22:55:35.223823 1543722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 22:55:35.237917 1543722 docker.go:198] disabling cri-docker service (if available) ...
	I1009 22:55:35.237979 1543722 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 22:55:35.254971 1543722 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 22:55:35.272140 1543722 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 22:55:35.367024 1543722 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 22:55:35.479159 1543722 docker.go:214] disabling docker service ...
	I1009 22:55:35.479253 1543722 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 22:55:35.502251 1543722 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 22:55:35.516654 1543722 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 22:55:35.617891 1543722 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 22:55:35.726610 1543722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 22:55:35.740451 1543722 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 22:55:35.760981 1543722 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1009 22:55:35.761086 1543722 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 22:55:35.773817 1543722 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 22:55:35.773907 1543722 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 22:55:35.786731 1543722 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 22:55:35.799157 1543722 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
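	Net effect of the three sed edits on /etc/crio/crio.conf.d/02-crio.conf (a sketch of the resulting key/value lines; the log only shows the substitutions, not the full drop-in):
	  pause_image = "registry.k8s.io/pause:3.9"
	  cgroup_manager = "cgroupfs"
	  conmon_cgroup = "pod"   # CRI-O requires "pod" here when cgroup_manager is cgroupfs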
	I1009 22:55:35.811908 1543722 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 22:55:35.823424 1543722 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 22:55:35.834134 1543722 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 22:55:35.844498 1543722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 22:55:35.930988 1543722 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 22:55:36.095983 1543722 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 22:55:36.096117 1543722 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 22:55:36.102174 1543722 start.go:540] Will wait 60s for crictl version
	I1009 22:55:36.102305 1543722 ssh_runner.go:195] Run: which crictl
	I1009 22:55:36.107745 1543722 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 22:55:36.154737 1543722 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1009 22:55:36.154897 1543722 ssh_runner.go:195] Run: crio --version
	I1009 22:55:36.205166 1543722 ssh_runner.go:195] Run: crio --version
	I1009 22:55:36.252293 1543722 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.6 ...
	I1009 22:55:36.254473 1543722 cli_runner.go:164] Run: docker network inspect addons-749116 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 22:55:36.271647 1543722 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 22:55:36.276243 1543722 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 22:55:36.289497 1543722 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1009 22:55:36.289569 1543722 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 22:55:36.352236 1543722 crio.go:496] all images are preloaded for cri-o runtime.
	I1009 22:55:36.352278 1543722 crio.go:415] Images already preloaded, skipping extraction
	I1009 22:55:36.352332 1543722 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 22:55:36.395390 1543722 crio.go:496] all images are preloaded for cri-o runtime.
	I1009 22:55:36.395417 1543722 cache_images.go:84] Images are preloaded, skipping loading
	I1009 22:55:36.395534 1543722 ssh_runner.go:195] Run: crio config
	I1009 22:55:36.451792 1543722 cni.go:84] Creating CNI manager for ""
	I1009 22:55:36.451816 1543722 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 22:55:36.451846 1543722 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1009 22:55:36.451870 1543722 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-749116 NodeName:addons-749116 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 22:55:36.452006 1543722 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-749116"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
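	A config like the one above can be exercised before the real init (run later in this log) without mutating the node, using kubeadm's dry-run mode:
	  sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run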
	
	I1009 22:55:36.452079 1543722 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-749116 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:addons-749116 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
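	Once the drop-in above lands at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the scp below), the merged unit is visible with:
	  systemctl cat kubelet.service   # base unit plus the 10-kubeadm.conf override
	  sudo systemctl daemon-reload    # needed whenever unit files change on disk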
	I1009 22:55:36.452147 1543722 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1009 22:55:36.462742 1543722 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 22:55:36.462816 1543722 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 22:55:36.472993 1543722 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I1009 22:55:36.494083 1543722 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 22:55:36.515814 1543722 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I1009 22:55:36.537553 1543722 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1009 22:55:36.542075 1543722 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 22:55:36.555611 1543722 certs.go:56] Setting up /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/addons-749116 for IP: 192.168.49.2
	I1009 22:55:36.555643 1543722 certs.go:190] acquiring lock for shared ca certs: {Name:mk430c21a56d31b4f15423923c65864a3e3a3c9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 22:55:36.555805 1543722 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17375-1537865/.minikube/ca.key
	I1009 22:55:36.967144 1543722 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17375-1537865/.minikube/ca.crt ...
	I1009 22:55:36.967176 1543722 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17375-1537865/.minikube/ca.crt: {Name:mk0d7415159a0249932c541cf60319615e915995 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 22:55:36.967372 1543722 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17375-1537865/.minikube/ca.key ...
	I1009 22:55:36.967385 1543722 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17375-1537865/.minikube/ca.key: {Name:mk7c8c4e0e94beeeb8270c9034d15ee454bd4085 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 22:55:36.967480 1543722 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17375-1537865/.minikube/proxy-client-ca.key
	I1009 22:55:37.094011 1543722 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17375-1537865/.minikube/proxy-client-ca.crt ...
	I1009 22:55:37.094040 1543722 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17375-1537865/.minikube/proxy-client-ca.crt: {Name:mka1de8fd5a4e258d7a16a41e3e476a39c15716e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 22:55:37.094222 1543722 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17375-1537865/.minikube/proxy-client-ca.key ...
	I1009 22:55:37.094235 1543722 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17375-1537865/.minikube/proxy-client-ca.key: {Name:mk76f4fc0c1f94b8ca0b77228f3a43211b3e509d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 22:55:37.094364 1543722 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/addons-749116/client.key
	I1009 22:55:37.094380 1543722 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/addons-749116/client.crt with IP's: []
	I1009 22:55:37.353552 1543722 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/addons-749116/client.crt ...
	I1009 22:55:37.353585 1543722 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/addons-749116/client.crt: {Name:mk2103de043fd19ab47560c100c1e6f034d42cc2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 22:55:37.353805 1543722 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/addons-749116/client.key ...
	I1009 22:55:37.353820 1543722 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/addons-749116/client.key: {Name:mk45e4f97aa2a4f96630be18baaa717be847ba9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 22:55:37.353922 1543722 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/addons-749116/apiserver.key.dd3b5fb2
	I1009 22:55:37.353941 1543722 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/addons-749116/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1009 22:55:38.030820 1543722 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/addons-749116/apiserver.crt.dd3b5fb2 ...
	I1009 22:55:38.030857 1543722 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/addons-749116/apiserver.crt.dd3b5fb2: {Name:mk5e1cbe63314e5cd5ab8f4ec370d4d10cd83e38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 22:55:38.031058 1543722 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/addons-749116/apiserver.key.dd3b5fb2 ...
	I1009 22:55:38.031074 1543722 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/addons-749116/apiserver.key.dd3b5fb2: {Name:mk29cd32598c7215bbe49074e4e523ccb42d6d3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 22:55:38.031182 1543722 certs.go:337] copying /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/addons-749116/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/addons-749116/apiserver.crt
	I1009 22:55:38.031265 1543722 certs.go:341] copying /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/addons-749116/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/addons-749116/apiserver.key
	I1009 22:55:38.031312 1543722 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/addons-749116/proxy-client.key
	I1009 22:55:38.031337 1543722 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/addons-749116/proxy-client.crt with IP's: []
	I1009 22:55:39.239627 1543722 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/addons-749116/proxy-client.crt ...
	I1009 22:55:39.239668 1543722 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/addons-749116/proxy-client.crt: {Name:mk5df75ed06cefbe3feb64ed563923af5ec13c85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 22:55:39.239913 1543722 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/addons-749116/proxy-client.key ...
	I1009 22:55:39.239931 1543722 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/addons-749116/proxy-client.key: {Name:mk8532cddf47d181aebe3d40eb12da9d1cced089 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 22:55:39.240167 1543722 certs.go:437] found cert: /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/home/jenkins/minikube-integration/17375-1537865/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 22:55:39.240223 1543722 certs.go:437] found cert: /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/home/jenkins/minikube-integration/17375-1537865/.minikube/certs/ca.pem (1078 bytes)
	I1009 22:55:39.240259 1543722 certs.go:437] found cert: /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/home/jenkins/minikube-integration/17375-1537865/.minikube/certs/cert.pem (1123 bytes)
	I1009 22:55:39.240297 1543722 certs.go:437] found cert: /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/home/jenkins/minikube-integration/17375-1537865/.minikube/certs/key.pem (1679 bytes)
	I1009 22:55:39.240941 1543722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/addons-749116/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1009 22:55:39.272127 1543722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/addons-749116/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 22:55:39.300989 1543722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/addons-749116/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 22:55:39.329775 1543722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/addons-749116/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 22:55:39.358782 1543722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-1537865/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 22:55:39.387827 1543722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-1537865/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 22:55:39.416309 1543722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-1537865/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 22:55:39.445336 1543722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-1537865/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 22:55:39.476137 1543722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-1537865/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 22:55:39.505336 1543722 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 22:55:39.526347 1543722 ssh_runner.go:195] Run: openssl version
	I1009 22:55:39.533756 1543722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 22:55:39.545456 1543722 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 22:55:39.550253 1543722 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  9 22:55 /usr/share/ca-certificates/minikubeCA.pem
	I1009 22:55:39.550339 1543722 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 22:55:39.559084 1543722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
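	The /etc/ssl/certs/b5213941.0 name is the CA's OpenSSL subject hash plus a ".0" suffix (the c_rehash lookup convention); the two Runs above compose to:
	  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # -> b5213941
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"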
	I1009 22:55:39.570681 1543722 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1009 22:55:39.575009 1543722 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1009 22:55:39.575071 1543722 kubeadm.go:404] StartCluster: {Name:addons-749116 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-749116 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1009 22:55:39.575167 1543722 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 22:55:39.575243 1543722 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 22:55:39.617398 1543722 cri.go:89] found id: ""
	I1009 22:55:39.617469 1543722 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 22:55:39.628586 1543722 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 22:55:39.639355 1543722 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1009 22:55:39.639446 1543722 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 22:55:39.650138 1543722 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 22:55:39.650180 1543722 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 22:55:39.704361 1543722 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I1009 22:55:39.704804 1543722 kubeadm.go:322] [preflight] Running pre-flight checks
	I1009 22:55:39.754037 1543722 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1009 22:55:39.754130 1543722 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1047-aws
	I1009 22:55:39.754190 1543722 kubeadm.go:322] OS: Linux
	I1009 22:55:39.754263 1543722 kubeadm.go:322] CGROUPS_CPU: enabled
	I1009 22:55:39.754337 1543722 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1009 22:55:39.754396 1543722 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1009 22:55:39.754471 1543722 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1009 22:55:39.754550 1543722 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1009 22:55:39.754617 1543722 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1009 22:55:39.754683 1543722 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1009 22:55:39.754753 1543722 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1009 22:55:39.754824 1543722 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1009 22:55:39.839642 1543722 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 22:55:39.839814 1543722 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 22:55:39.839968 1543722 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1009 22:55:40.135594 1543722 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 22:55:40.138973 1543722 out.go:204]   - Generating certificates and keys ...
	I1009 22:55:40.139143 1543722 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1009 22:55:40.139226 1543722 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1009 22:55:40.399348 1543722 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1009 22:55:40.801782 1543722 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1009 22:55:41.335682 1543722 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1009 22:55:41.672682 1543722 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1009 22:55:42.034899 1543722 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1009 22:55:42.035406 1543722 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-749116 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1009 22:55:42.712222 1543722 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1009 22:55:42.712619 1543722 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-749116 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1009 22:55:43.176192 1543722 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1009 22:55:43.844022 1543722 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1009 22:55:43.954738 1543722 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1009 22:55:43.955097 1543722 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 22:55:44.390599 1543722 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 22:55:44.949627 1543722 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 22:55:45.662433 1543722 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 22:55:46.124786 1543722 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 22:55:46.125723 1543722 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 22:55:46.128463 1543722 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 22:55:46.131334 1543722 out.go:204]   - Booting up control plane ...
	I1009 22:55:46.131472 1543722 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 22:55:46.131547 1543722 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 22:55:46.131609 1543722 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 22:55:46.144786 1543722 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 22:55:46.144878 1543722 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 22:55:46.144916 1543722 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1009 22:55:46.256185 1543722 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1009 22:55:53.759209 1543722 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.503081 seconds
	I1009 22:55:53.759323 1543722 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1009 22:55:53.774730 1543722 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1009 22:55:54.299923 1543722 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1009 22:55:54.300112 1543722 kubeadm.go:322] [mark-control-plane] Marking the node addons-749116 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1009 22:55:54.813341 1543722 kubeadm.go:322] [bootstrap-token] Using token: 964wre.ok9iy6xv7slv5y3r
	I1009 22:55:54.815389 1543722 out.go:204]   - Configuring RBAC rules ...
	I1009 22:55:54.815516 1543722 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1009 22:55:54.821086 1543722 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1009 22:55:54.831479 1543722 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1009 22:55:54.839159 1543722 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1009 22:55:54.844276 1543722 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1009 22:55:54.848824 1543722 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1009 22:55:54.863438 1543722 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1009 22:55:55.167386 1543722 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1009 22:55:55.258472 1543722 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1009 22:55:55.259827 1543722 kubeadm.go:322] 
	I1009 22:55:55.259894 1543722 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1009 22:55:55.259900 1543722 kubeadm.go:322] 
	I1009 22:55:55.259972 1543722 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1009 22:55:55.259978 1543722 kubeadm.go:322] 
	I1009 22:55:55.260002 1543722 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1009 22:55:55.260057 1543722 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1009 22:55:55.260104 1543722 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1009 22:55:55.260109 1543722 kubeadm.go:322] 
	I1009 22:55:55.260159 1543722 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1009 22:55:55.260164 1543722 kubeadm.go:322] 
	I1009 22:55:55.260208 1543722 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1009 22:55:55.260213 1543722 kubeadm.go:322] 
	I1009 22:55:55.260261 1543722 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1009 22:55:55.260331 1543722 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1009 22:55:55.260395 1543722 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1009 22:55:55.260399 1543722 kubeadm.go:322] 
	I1009 22:55:55.260477 1543722 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1009 22:55:55.260560 1543722 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1009 22:55:55.260572 1543722 kubeadm.go:322] 
	I1009 22:55:55.260650 1543722 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 964wre.ok9iy6xv7slv5y3r \
	I1009 22:55:55.260746 1543722 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:e2aebf53348f507bad0adab8a765b229b70810954e22f1e7a919941009267e3f \
	I1009 22:55:55.260766 1543722 kubeadm.go:322] 	--control-plane 
	I1009 22:55:55.260770 1543722 kubeadm.go:322] 
	I1009 22:55:55.260849 1543722 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1009 22:55:55.260853 1543722 kubeadm.go:322] 
	I1009 22:55:55.260930 1543722 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 964wre.ok9iy6xv7slv5y3r \
	I1009 22:55:55.261032 1543722 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:e2aebf53348f507bad0adab8a765b229b70810954e22f1e7a919941009267e3f 
	I1009 22:55:55.265006 1543722 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1047-aws\n", err: exit status 1
	I1009 22:55:55.265128 1543722 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
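	The --discovery-token-ca-cert-hash in the join commands above can be recomputed from the cluster CA to validate a join line out-of-band (standard kubeadm recipe; CA path per this cluster's layout):
	  openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	    | openssl pkey -pubin -outform der \
	    | openssl dgst -sha256 | awk '{print "sha256:" $2}'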
	I1009 22:55:55.265143 1543722 cni.go:84] Creating CNI manager for ""
	I1009 22:55:55.265150 1543722 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 22:55:55.267737 1543722 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1009 22:55:55.269814 1543722 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1009 22:55:55.283397 1543722 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.2/kubectl ...
	I1009 22:55:55.283467 1543722 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1009 22:55:55.329299 1543722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1009 22:55:56.242613 1543722 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1009 22:55:56.242746 1543722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 22:55:56.242818 1543722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90 minikube.k8s.io/name=addons-749116 minikube.k8s.io/updated_at=2023_10_09T22_55_56_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 22:55:56.465547 1543722 ops.go:34] apiserver oom_adj: -16
	I1009 22:55:56.465651 1543722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 22:55:56.599192 1543722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 22:55:57.199272 1543722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 22:55:57.699382 1543722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 22:55:58.199431 1543722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 22:55:58.699353 1543722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 22:55:59.199678 1543722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 22:55:59.699685 1543722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 22:56:00.199016 1543722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 22:56:00.698779 1543722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 22:56:01.198992 1543722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 22:56:01.699293 1543722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 22:56:02.198954 1543722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 22:56:02.699663 1543722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 22:56:03.199403 1543722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 22:56:03.699692 1543722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 22:56:04.199335 1543722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 22:56:04.698831 1543722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 22:56:05.198862 1543722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 22:56:05.698780 1543722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 22:56:06.199265 1543722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 22:56:06.699617 1543722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 22:56:07.199734 1543722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 22:56:07.699289 1543722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 22:56:08.199270 1543722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 22:56:08.308805 1543722 kubeadm.go:1081] duration metric: took 12.066100739s to wait for elevateKubeSystemPrivileges.
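	The run of identical "get sa default" calls above is a readiness poll: the default ServiceAccount is created asynchronously by the controller-manager, and addons cannot be installed before it exists. A standalone equivalent:
	  until sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	    sleep 0.5
	  done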
	I1009 22:56:08.308915 1543722 kubeadm.go:406] StartCluster complete in 28.733863325s
	I1009 22:56:08.308951 1543722 settings.go:142] acquiring lock: {Name:mkeeac28244e9503bae3d91ba3a5c4a3392545f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 22:56:08.309114 1543722 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17375-1537865/kubeconfig
	I1009 22:56:08.309551 1543722 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17375-1537865/kubeconfig: {Name:mk913f33f2148d9a5b250c16fc9df0a8782f9275 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 22:56:08.309782 1543722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1009 22:56:08.310091 1543722 config.go:182] Loaded profile config "addons-749116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1009 22:56:08.310237 1543722 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true]
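	The toEnable map above mirrors what the minikube CLI exposes; the same addons can be listed and toggled per profile with the stock commands:
	  minikube -p addons-749116 addons list
	  minikube -p addons-749116 addons enable ingress
	  minikube -p addons-749116 addons disable inspektor-gadget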
	I1009 22:56:08.310310 1543722 addons.go:69] Setting volumesnapshots=true in profile "addons-749116"
	I1009 22:56:08.310323 1543722 addons.go:231] Setting addon volumesnapshots=true in "addons-749116"
	I1009 22:56:08.310354 1543722 host.go:66] Checking if "addons-749116" exists ...
	I1009 22:56:08.310816 1543722 cli_runner.go:164] Run: docker container inspect addons-749116 --format={{.State.Status}}
	I1009 22:56:08.311259 1543722 addons.go:69] Setting cloud-spanner=true in profile "addons-749116"
	I1009 22:56:08.311276 1543722 addons.go:231] Setting addon cloud-spanner=true in "addons-749116"
	I1009 22:56:08.311306 1543722 host.go:66] Checking if "addons-749116" exists ...
	I1009 22:56:08.311681 1543722 cli_runner.go:164] Run: docker container inspect addons-749116 --format={{.State.Status}}
	I1009 22:56:08.314303 1543722 addons.go:69] Setting metrics-server=true in profile "addons-749116"
	I1009 22:56:08.314377 1543722 addons.go:231] Setting addon metrics-server=true in "addons-749116"
	I1009 22:56:08.314424 1543722 host.go:66] Checking if "addons-749116" exists ...
	I1009 22:56:08.314934 1543722 cli_runner.go:164] Run: docker container inspect addons-749116 --format={{.State.Status}}
	I1009 22:56:08.315346 1543722 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-749116"
	I1009 22:56:08.315364 1543722 addons.go:231] Setting addon nvidia-device-plugin=true in "addons-749116"
	I1009 22:56:08.315395 1543722 host.go:66] Checking if "addons-749116" exists ...
	I1009 22:56:08.315775 1543722 cli_runner.go:164] Run: docker container inspect addons-749116 --format={{.State.Status}}
	I1009 22:56:08.316998 1543722 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-749116"
	I1009 22:56:08.317056 1543722 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-749116"
	I1009 22:56:08.317089 1543722 host.go:66] Checking if "addons-749116" exists ...
	I1009 22:56:08.317578 1543722 cli_runner.go:164] Run: docker container inspect addons-749116 --format={{.State.Status}}
	I1009 22:56:08.322574 1543722 addons.go:69] Setting default-storageclass=true in profile "addons-749116"
	I1009 22:56:08.322692 1543722 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-749116"
	I1009 22:56:08.323066 1543722 cli_runner.go:164] Run: docker container inspect addons-749116 --format={{.State.Status}}
	I1009 22:56:08.333177 1543722 addons.go:69] Setting registry=true in profile "addons-749116"
	I1009 22:56:08.333263 1543722 addons.go:231] Setting addon registry=true in "addons-749116"
	I1009 22:56:08.333338 1543722 host.go:66] Checking if "addons-749116" exists ...
	I1009 22:56:08.333812 1543722 cli_runner.go:164] Run: docker container inspect addons-749116 --format={{.State.Status}}
	I1009 22:56:08.349345 1543722 addons.go:69] Setting gcp-auth=true in profile "addons-749116"
	I1009 22:56:08.349388 1543722 mustload.go:65] Loading cluster: addons-749116
	I1009 22:56:08.349595 1543722 config.go:182] Loaded profile config "addons-749116": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1009 22:56:08.349839 1543722 cli_runner.go:164] Run: docker container inspect addons-749116 --format={{.State.Status}}
	I1009 22:56:08.357596 1543722 addons.go:69] Setting storage-provisioner=true in profile "addons-749116"
	I1009 22:56:08.357677 1543722 addons.go:231] Setting addon storage-provisioner=true in "addons-749116"
	I1009 22:56:08.357746 1543722 host.go:66] Checking if "addons-749116" exists ...
	I1009 22:56:08.358223 1543722 cli_runner.go:164] Run: docker container inspect addons-749116 --format={{.State.Status}}
	I1009 22:56:08.375300 1543722 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-749116"
	I1009 22:56:08.375375 1543722 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-749116"
	I1009 22:56:08.375730 1543722 cli_runner.go:164] Run: docker container inspect addons-749116 --format={{.State.Status}}
	I1009 22:56:08.383282 1543722 addons.go:69] Setting ingress=true in profile "addons-749116"
	I1009 22:56:08.396882 1543722 addons.go:231] Setting addon ingress=true in "addons-749116"
	I1009 22:56:08.396947 1543722 host.go:66] Checking if "addons-749116" exists ...
	I1009 22:56:08.397401 1543722 cli_runner.go:164] Run: docker container inspect addons-749116 --format={{.State.Status}}
	I1009 22:56:08.383304 1543722 addons.go:69] Setting ingress-dns=true in profile "addons-749116"
	I1009 22:56:08.411854 1543722 addons.go:231] Setting addon ingress-dns=true in "addons-749116"
	I1009 22:56:08.411918 1543722 host.go:66] Checking if "addons-749116" exists ...
	I1009 22:56:08.412390 1543722 cli_runner.go:164] Run: docker container inspect addons-749116 --format={{.State.Status}}
	I1009 22:56:08.383311 1543722 addons.go:69] Setting inspektor-gadget=true in profile "addons-749116"
	I1009 22:56:08.431300 1543722 addons.go:231] Setting addon inspektor-gadget=true in "addons-749116"
	I1009 22:56:08.431356 1543722 host.go:66] Checking if "addons-749116" exists ...
	I1009 22:56:08.431795 1543722 cli_runner.go:164] Run: docker container inspect addons-749116 --format={{.State.Status}}
	I1009 22:56:08.546404 1543722 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1009 22:56:08.553470 1543722 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.11
	I1009 22:56:08.573514 1543722 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I1009 22:56:08.573560 1543722 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1009 22:56:08.573626 1543722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-749116
	I1009 22:56:08.577090 1543722 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1009 22:56:08.578777 1543722 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1009 22:56:08.578979 1543722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-749116
	I1009 22:56:08.606352 1543722 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 22:56:08.608474 1543722 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 22:56:08.608497 1543722 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 22:56:08.608564 1543722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-749116
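Each "scp memory" step renders a manifest in memory and streams it over the container's forwarded SSH port into /etc/kubernetes/addons on the node before it is applied. A hand-rolled sketch of the same transfer, using the port and key path this run logs below (the local storage-provisioner.yaml stands in for the in-memory payload):

	ssh -i /home/jenkins/minikube-integration/17375-1537865/.minikube/machines/addons-749116/id_rsa \
	  -p 34359 docker@127.0.0.1 \
	  'sudo tee /etc/kubernetes/addons/storage-provisioner.yaml' < storage-provisioner.yaml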
	I1009 22:56:08.629530 1543722 out.go:177]   - Using image docker.io/registry:2.8.3
	I1009 22:56:08.623862 1543722 host.go:66] Checking if "addons-749116" exists ...
	I1009 22:56:08.604232 1543722 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1009 22:56:08.604236 1543722 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1009 22:56:08.605179 1543722 addons.go:231] Setting addon storage-provisioner-rancher=true in "addons-749116"
	I1009 22:56:08.604222 1543722 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.1
	I1009 22:56:08.624830 1543722 addons.go:231] Setting addon default-storageclass=true in "addons-749116"
	I1009 22:56:08.632531 1543722 host.go:66] Checking if "addons-749116" exists ...
	I1009 22:56:08.633036 1543722 cli_runner.go:164] Run: docker container inspect addons-749116 --format={{.State.Status}}
	I1009 22:56:08.636819 1543722 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1009 22:56:08.634950 1543722 host.go:66] Checking if "addons-749116" exists ...
	I1009 22:56:08.640770 1543722 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1009 22:56:08.641307 1543722 cli_runner.go:164] Run: docker container inspect addons-749116 --format={{.State.Status}}
	I1009 22:56:08.658813 1543722 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1009 22:56:08.658837 1543722 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1009 22:56:08.658905 1543722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-749116
	I1009 22:56:08.668991 1543722 addons.go:423] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1009 22:56:08.669014 1543722 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1009 22:56:08.669093 1543722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-749116
	I1009 22:56:08.674301 1543722 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I1009 22:56:08.725622 1543722 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I1009 22:56:08.730876 1543722 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.1
	I1009 22:56:08.735372 1543722 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1009 22:56:08.735396 1543722 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16083 bytes)
	I1009 22:56:08.735474 1543722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-749116
	I1009 22:56:08.754293 1543722 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I1009 22:56:08.754361 1543722 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1009 22:56:08.754460 1543722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-749116
	I1009 22:56:08.767853 1543722 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.21.0
	I1009 22:56:08.770311 1543722 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1009 22:56:08.770336 1543722 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1009 22:56:08.770410 1543722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-749116
	I1009 22:56:08.670767 1543722 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1009 22:56:08.783900 1543722 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1009 22:56:08.776439 1543722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34359 SSHKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/machines/addons-749116/id_rsa Username:docker}
	I1009 22:56:08.776548 1543722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34359 SSHKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/machines/addons-749116/id_rsa Username:docker}
	I1009 22:56:08.784365 1543722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34359 SSHKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/machines/addons-749116/id_rsa Username:docker}
	I1009 22:56:08.784461 1543722 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-749116" context rescaled to 1 replicas
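The coredns rescale above trims the deployment to one replica for this single-node profile; expressed with kubectl it is roughly:

	kubectl --context addons-749116 -n kube-system scale deployment/coredns --replicas=1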
	I1009 22:56:08.784539 1543722 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 22:56:08.787491 1543722 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 22:56:08.787562 1543722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-749116
	I1009 22:56:08.793673 1543722 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1009 22:56:08.791746 1543722 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1009 22:56:08.793400 1543722 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 22:56:08.799761 1543722 out.go:177] * Verifying Kubernetes components...
	I1009 22:56:08.802158 1543722 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1009 22:56:08.802177 1543722 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1009 22:56:08.802241 1543722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-749116
	I1009 22:56:08.802452 1543722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 22:56:08.806702 1543722 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1009 22:56:08.809014 1543722 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1009 22:56:08.807310 1543722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1009 22:56:08.814092 1543722 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1009 22:56:08.816155 1543722 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1009 22:56:08.816176 1543722 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1009 22:56:08.816245 1543722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-749116
	I1009 22:56:08.870064 1543722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34359 SSHKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/machines/addons-749116/id_rsa Username:docker}
	I1009 22:56:08.897626 1543722 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1009 22:56:08.900189 1543722 out.go:177]   - Using image docker.io/busybox:stable
	I1009 22:56:08.904446 1543722 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1009 22:56:08.904468 1543722 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1009 22:56:08.904537 1543722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-749116
	I1009 22:56:08.911522 1543722 node_ready.go:35] waiting up to 6m0s for node "addons-749116" to be "Ready" ...
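This readiness gate polls the node object until its Ready condition turns True; the same wait expressed with kubectl:

	kubectl --context addons-749116 wait --for=condition=Ready node/addons-749116 --timeout=6m0s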
	I1009 22:56:08.941539 1543722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34359 SSHKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/machines/addons-749116/id_rsa Username:docker}
	I1009 22:56:08.962386 1543722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34359 SSHKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/machines/addons-749116/id_rsa Username:docker}
	I1009 22:56:08.976031 1543722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34359 SSHKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/machines/addons-749116/id_rsa Username:docker}
	I1009 22:56:08.988579 1543722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34359 SSHKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/machines/addons-749116/id_rsa Username:docker}
	I1009 22:56:08.992763 1543722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34359 SSHKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/machines/addons-749116/id_rsa Username:docker}
	I1009 22:56:09.025281 1543722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34359 SSHKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/machines/addons-749116/id_rsa Username:docker}
	I1009 22:56:09.030202 1543722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34359 SSHKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/machines/addons-749116/id_rsa Username:docker}
	I1009 22:56:09.061277 1543722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34359 SSHKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/machines/addons-749116/id_rsa Username:docker}
	I1009 22:56:09.261948 1543722 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1009 22:56:09.261970 1543722 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1009 22:56:09.273846 1543722 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 22:56:09.290078 1543722 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1009 22:56:09.330922 1543722 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I1009 22:56:09.330984 1543722 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1009 22:56:09.362585 1543722 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1009 22:56:09.451863 1543722 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1009 22:56:09.451889 1543722 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1009 22:56:09.454540 1543722 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1009 22:56:09.454572 1543722 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1009 22:56:09.506684 1543722 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1009 22:56:09.527925 1543722 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1009 22:56:09.570408 1543722 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1009 22:56:09.570440 1543722 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1009 22:56:09.573520 1543722 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1009 22:56:09.573581 1543722 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1009 22:56:09.596058 1543722 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 22:56:09.598178 1543722 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1009 22:56:09.598202 1543722 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1009 22:56:09.648190 1543722 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1009 22:56:09.648226 1543722 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1009 22:56:09.661367 1543722 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1009 22:56:09.663488 1543722 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1009 22:56:09.663527 1543722 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1009 22:56:09.778883 1543722 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I1009 22:56:09.778910 1543722 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1009 22:56:09.780064 1543722 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1009 22:56:09.780087 1543722 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1009 22:56:09.810477 1543722 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1009 22:56:09.844909 1543722 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1009 22:56:09.844937 1543722 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1009 22:56:09.876892 1543722 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1009 22:56:09.876918 1543722 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1009 22:56:10.038250 1543722 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1009 22:56:10.038277 1543722 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1009 22:56:10.045683 1543722 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1009 22:56:10.045714 1543722 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1009 22:56:10.092047 1543722 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1009 22:56:10.092502 1543722 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1009 22:56:10.092532 1543722 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1009 22:56:10.173935 1543722 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1009 22:56:10.258567 1543722 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1009 22:56:10.258593 1543722 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1009 22:56:10.287389 1543722 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1009 22:56:10.287414 1543722 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1009 22:56:10.405670 1543722 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1009 22:56:10.405698 1543722 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1009 22:56:10.411822 1543722 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1009 22:56:10.411858 1543722 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1009 22:56:10.499917 1543722 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1009 22:56:10.499951 1543722 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1009 22:56:10.527159 1543722 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I1009 22:56:10.527187 1543722 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1009 22:56:10.586889 1543722 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1009 22:56:10.586924 1543722 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1009 22:56:10.632924 1543722 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1009 22:56:10.632957 1543722 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I1009 22:56:10.684467 1543722 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1009 22:56:10.684492 1543722 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1009 22:56:10.713715 1543722 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1009 22:56:10.884441 1543722 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1009 22:56:10.884503 1543722 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1009 22:56:11.092667 1543722 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1009 22:56:11.092696 1543722 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1009 22:56:11.204206 1543722 node_ready.go:58] node "addons-749116" has status "Ready":"False"
	I1009 22:56:11.262458 1543722 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1009 22:56:11.379416 1543722 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.568211015s)
	I1009 22:56:11.379457 1543722 start.go:926] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
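The sed pipeline that just completed rewrites the coredns ConfigMap so in-cluster DNS resolves host.minikube.internal to the container gateway. Reconstructed from the script above, the injected Corefile stanza is:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}

(a log directive is also inserted ahead of errors).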
	I1009 22:56:13.405695 1543722 node_ready.go:58] node "addons-749116" has status "Ready":"False"
	I1009 22:56:13.778521 1543722 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.504640117s)
	I1009 22:56:13.778599 1543722 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.488452834s)
	I1009 22:56:13.778682 1543722 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.416025358s)
	I1009 22:56:13.953145 1543722 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.446420333s)
	I1009 22:56:14.600576 1543722 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.072605126s)
	I1009 22:56:14.600609 1543722 addons.go:467] Verifying addon ingress=true in "addons-749116"
	I1009 22:56:14.600678 1543722 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.004592885s)
	I1009 22:56:14.600883 1543722 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.939488878s)
	I1009 22:56:14.600940 1543722 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.790436129s)
	I1009 22:56:14.600955 1543722 addons.go:467] Verifying addon registry=true in "addons-749116"
	I1009 22:56:14.602965 1543722 out.go:177] * Verifying registry addon...
	I1009 22:56:14.601368 1543722 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.509289584s)
	I1009 22:56:14.601450 1543722 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.427481723s)
	I1009 22:56:14.601516 1543722 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.887760093s)
	I1009 22:56:14.605004 1543722 out.go:177] * Verifying ingress addon...
	I1009 22:56:14.608783 1543722 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1009 22:56:14.605088 1543722 addons.go:467] Verifying addon metrics-server=true in "addons-749116"
	W1009 22:56:14.605114 1543722 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1009 22:56:14.609099 1543722 retry.go:31] will retry after 140.029976ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
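The failure above is an ordering problem rather than a bad manifest: the VolumeSnapshot CRDs and a VolumeSnapshotClass object are applied in one pass, and the class is rejected because the freshly created CRDs are not yet established in the API server. minikube handles this by retrying (and, below, re-applying with --force). A manual workaround sketch, splitting the apply and waiting for CRD establishment (file paths taken from the log above):

	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=established crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml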
	I1009 22:56:14.605983 1543722 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1009 22:56:14.639899 1543722 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1009 22:56:14.639930 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:14.651481 1543722 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1009 22:56:14.651512 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:14.660360 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:14.677386 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:14.750145 1543722 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1009 22:56:14.997480 1543722 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.734968542s)
	I1009 22:56:14.997518 1543722 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-749116"
	I1009 22:56:14.999981 1543722 out.go:177] * Verifying csi-hostpath-driver addon...
	I1009 22:56:15.003309 1543722 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1009 22:56:15.027156 1543722 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1009 22:56:15.027183 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:15.037259 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:15.178452 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:15.192258 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:15.542388 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:15.665911 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:15.679694 1543722 node_ready.go:58] node "addons-749116" has status "Ready":"False"
	I1009 22:56:15.684554 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:15.874932 1543722 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.124738786s)
	I1009 22:56:16.042906 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:16.164749 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:16.181696 1543722 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1009 22:56:16.181798 1543722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-749116
	I1009 22:56:16.183216 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:16.202755 1543722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34359 SSHKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/machines/addons-749116/id_rsa Username:docker}
	I1009 22:56:16.310707 1543722 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1009 22:56:16.336264 1543722 addons.go:231] Setting addon gcp-auth=true in "addons-749116"
	I1009 22:56:16.336315 1543722 host.go:66] Checking if "addons-749116" exists ...
	I1009 22:56:16.336808 1543722 cli_runner.go:164] Run: docker container inspect addons-749116 --format={{.State.Status}}
	I1009 22:56:16.355844 1543722 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1009 22:56:16.355938 1543722 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-749116
	I1009 22:56:16.374635 1543722 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34359 SSHKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/machines/addons-749116/id_rsa Username:docker}
	I1009 22:56:16.470471 1543722 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I1009 22:56:16.472174 1543722 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1009 22:56:16.474221 1543722 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1009 22:56:16.474241 1543722 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1009 22:56:16.497190 1543722 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1009 22:56:16.497214 1543722 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1009 22:56:16.519459 1543722 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1009 22:56:16.519521 1543722 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I1009 22:56:16.542972 1543722 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1009 22:56:16.544723 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:16.666861 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:16.682586 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:17.042514 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:17.165408 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:17.182153 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:17.576306 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:17.643771 1543722 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.100759192s)
	I1009 22:56:17.645585 1543722 addons.go:467] Verifying addon gcp-auth=true in "addons-749116"
	I1009 22:56:17.647577 1543722 out.go:177] * Verifying gcp-auth addon...
	I1009 22:56:17.650297 1543722 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
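These kapi.go waits poll pods by label selector until they report Ready; the gcp-auth wait above corresponds roughly to:

	kubectl --context addons-749116 -n gcp-auth wait --for=condition=Ready pod \
	  -l kubernetes.io/minikube-addons=gcp-auth --timeout=6m0s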
	I1009 22:56:17.728363 1543722 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1009 22:56:17.728392 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:17.728940 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:17.742568 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:17.743415 1543722 node_ready.go:58] node "addons-749116" has status "Ready":"False"
	I1009 22:56:17.746853 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:18.043129 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:18.165272 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:18.189448 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:18.250729 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:18.542898 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:18.665515 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:18.684570 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:18.750918 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:19.042186 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:19.165645 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:19.191018 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:19.252009 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:19.542868 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:19.667082 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:19.690139 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:19.751417 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:20.043768 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:20.165404 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:20.177516 1543722 node_ready.go:58] node "addons-749116" has status "Ready":"False"
	I1009 22:56:20.183920 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:20.251638 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:20.541739 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:20.665662 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:20.687105 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:20.751065 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:21.044514 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:21.165444 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:21.184743 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:21.251683 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:21.542935 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:21.667498 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:21.681953 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:21.751719 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:22.042636 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:22.165184 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:22.182080 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:22.251226 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:22.543331 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:22.664743 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:22.676071 1543722 node_ready.go:58] node "addons-749116" has status "Ready":"False"
	I1009 22:56:22.681774 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:22.751067 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:23.042794 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:23.164976 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:23.182818 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:23.251111 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:23.542040 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:23.665549 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:23.681442 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:23.750555 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:24.042027 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:24.164722 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:24.182522 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:24.250729 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:24.541944 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:24.664577 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:24.677245 1543722 node_ready.go:58] node "addons-749116" has status "Ready":"False"
	I1009 22:56:24.681911 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:24.751147 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:25.042975 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:25.165578 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:25.181814 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:25.250422 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:25.541667 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:25.668642 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:25.687735 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:25.751316 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:26.043697 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:26.165566 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:26.181408 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:26.250938 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:26.541499 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:26.665524 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:26.681762 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:26.750850 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:27.043090 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:27.164670 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:27.177263 1543722 node_ready.go:58] node "addons-749116" has status "Ready":"False"
	I1009 22:56:27.181593 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:27.250587 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:27.542523 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:27.665469 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:27.682349 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:27.750599 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:28.042412 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:28.164851 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:28.181369 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:28.250459 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:28.541846 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:28.665170 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:28.681111 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:28.750683 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:29.042210 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:29.165553 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:29.181300 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:29.250518 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:29.541962 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:29.665200 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:29.676499 1543722 node_ready.go:58] node "addons-749116" has status "Ready":"False"
	I1009 22:56:29.682441 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:29.750720 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:30.043361 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:30.165791 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:30.182047 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:30.251435 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:30.542227 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:30.665723 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:30.681939 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:30.750475 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:31.041611 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:31.164979 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:31.181575 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:31.251077 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:31.542502 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:31.664736 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:31.677254 1543722 node_ready.go:58] node "addons-749116" has status "Ready":"False"
	I1009 22:56:31.682027 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:31.751100 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:32.042234 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:32.165205 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:32.181540 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:32.251041 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:32.541675 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:32.665047 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:32.681286 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:32.750804 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:33.042816 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:33.165496 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:33.182107 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:33.250711 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:33.542948 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:33.665493 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:33.681465 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:33.751144 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:34.042589 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:34.164724 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:34.176871 1543722 node_ready.go:58] node "addons-749116" has status "Ready":"False"
	I1009 22:56:34.182432 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:34.250585 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:34.542361 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:34.664722 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:34.681735 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:34.750915 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:35.042513 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:35.165249 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:35.182455 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:35.251386 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:35.549871 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:35.665439 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:35.682084 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:35.751498 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:36.042565 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:36.165002 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:36.182442 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:36.251316 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:36.542953 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:36.664920 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:36.676253 1543722 node_ready.go:58] node "addons-749116" has status "Ready":"False"
	I1009 22:56:36.681609 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:36.751276 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:37.043071 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:37.164858 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:37.184137 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:37.250613 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:37.543498 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:37.664853 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:37.681852 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:37.750344 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:38.042422 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:38.165055 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:38.182289 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:38.250180 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:38.541859 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:38.665378 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:38.676732 1543722 node_ready.go:58] node "addons-749116" has status "Ready":"False"
	I1009 22:56:38.681592 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:38.751511 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:39.042604 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:39.165537 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:39.182036 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:39.250621 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:39.541413 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:39.664563 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:39.681910 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:39.800538 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:40.128212 1543722 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1009 22:56:40.128240 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:40.214375 1543722 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1009 22:56:40.214409 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:40.215288 1543722 node_ready.go:49] node "addons-749116" has status "Ready":"True"
	I1009 22:56:40.215313 1543722 node_ready.go:38] duration metric: took 31.303765878s waiting for node "addons-749116" to be "Ready" ...
	I1009 22:56:40.215325 1543722 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 22:56:40.216754 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:40.233708 1543722 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-pkb4x" in "kube-system" namespace to be "Ready" ...
	I1009 22:56:40.265966 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:40.544270 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:40.673726 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:40.684708 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:40.751619 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:41.053631 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:41.167308 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:41.184518 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:41.252262 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:41.544566 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:41.665208 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:41.683961 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:41.750872 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:42.067398 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:42.167101 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:42.187996 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:42.252717 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:42.312656 1543722 pod_ready.go:92] pod "coredns-5dd5756b68-pkb4x" in "kube-system" namespace has status "Ready":"True"
	I1009 22:56:42.312684 1543722 pod_ready.go:81] duration metric: took 2.078939836s waiting for pod "coredns-5dd5756b68-pkb4x" in "kube-system" namespace to be "Ready" ...
	I1009 22:56:42.312733 1543722 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-749116" in "kube-system" namespace to be "Ready" ...
	I1009 22:56:42.325828 1543722 pod_ready.go:92] pod "etcd-addons-749116" in "kube-system" namespace has status "Ready":"True"
	I1009 22:56:42.325857 1543722 pod_ready.go:81] duration metric: took 13.111311ms waiting for pod "etcd-addons-749116" in "kube-system" namespace to be "Ready" ...
	I1009 22:56:42.325873 1543722 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-749116" in "kube-system" namespace to be "Ready" ...
	I1009 22:56:42.349758 1543722 pod_ready.go:92] pod "kube-apiserver-addons-749116" in "kube-system" namespace has status "Ready":"True"
	I1009 22:56:42.349793 1543722 pod_ready.go:81] duration metric: took 23.911501ms waiting for pod "kube-apiserver-addons-749116" in "kube-system" namespace to be "Ready" ...
	I1009 22:56:42.349819 1543722 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-749116" in "kube-system" namespace to be "Ready" ...
	I1009 22:56:42.387773 1543722 pod_ready.go:92] pod "kube-controller-manager-addons-749116" in "kube-system" namespace has status "Ready":"True"
	I1009 22:56:42.387797 1543722 pod_ready.go:81] duration metric: took 37.969301ms waiting for pod "kube-controller-manager-addons-749116" in "kube-system" namespace to be "Ready" ...
	I1009 22:56:42.387814 1543722 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qkshl" in "kube-system" namespace to be "Ready" ...
	I1009 22:56:42.574347 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:42.595068 1543722 pod_ready.go:92] pod "kube-proxy-qkshl" in "kube-system" namespace has status "Ready":"True"
	I1009 22:56:42.595159 1543722 pod_ready.go:81] duration metric: took 207.336057ms waiting for pod "kube-proxy-qkshl" in "kube-system" namespace to be "Ready" ...
	I1009 22:56:42.595188 1543722 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-749116" in "kube-system" namespace to be "Ready" ...
	I1009 22:56:42.665950 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:42.683536 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:42.751345 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:42.978091 1543722 pod_ready.go:92] pod "kube-scheduler-addons-749116" in "kube-system" namespace has status "Ready":"True"
	I1009 22:56:42.978160 1543722 pod_ready.go:81] duration metric: took 382.934966ms waiting for pod "kube-scheduler-addons-749116" in "kube-system" namespace to be "Ready" ...
	I1009 22:56:42.978187 1543722 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-7c66d45ddc-5s7nh" in "kube-system" namespace to be "Ready" ...
	I1009 22:56:43.044483 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:43.165463 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:43.182144 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:43.250936 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:43.543225 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:43.665846 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:43.685633 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:43.751785 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:44.043410 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:44.165395 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:44.184092 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:44.253213 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:44.544820 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:44.665670 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:44.683038 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:44.751601 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:45.054360 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:45.168368 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:45.183943 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:45.258696 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:45.306347 1543722 pod_ready.go:102] pod "metrics-server-7c66d45ddc-5s7nh" in "kube-system" namespace has status "Ready":"False"
	I1009 22:56:45.546036 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:45.666522 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:45.683351 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:45.751263 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:46.050479 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:46.165564 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:46.185465 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:46.251747 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:46.543200 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:46.666464 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:46.682347 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:46.751556 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:47.045353 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:47.166012 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:47.182504 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:47.251283 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:47.544551 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:47.665157 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:47.682926 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:47.751080 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:47.785739 1543722 pod_ready.go:102] pod "metrics-server-7c66d45ddc-5s7nh" in "kube-system" namespace has status "Ready":"False"
	I1009 22:56:48.079726 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:48.166350 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:48.182272 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:48.251111 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:48.543710 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:48.665527 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:48.682359 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:48.751191 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:49.069258 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:49.174019 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:49.183591 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:49.252447 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:49.553308 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:49.664824 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:49.682543 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:49.750534 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:49.786156 1543722 pod_ready.go:102] pod "metrics-server-7c66d45ddc-5s7nh" in "kube-system" namespace has status "Ready":"False"
	I1009 22:56:50.045232 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:50.165135 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:50.183165 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:50.252066 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:50.543269 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:50.667414 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:50.682647 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:50.751377 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:51.046199 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:51.165892 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:51.184533 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:51.251534 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:51.546596 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:51.666353 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:51.682605 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:51.751919 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:51.788415 1543722 pod_ready.go:102] pod "metrics-server-7c66d45ddc-5s7nh" in "kube-system" namespace has status "Ready":"False"
	I1009 22:56:52.053654 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:52.165907 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:52.183361 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:52.250719 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:52.545821 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:52.666894 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:52.683428 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:52.751421 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:53.047813 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:53.166253 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:53.183023 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:53.251850 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:53.546654 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:53.666002 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:53.685556 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:53.751459 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:53.794274 1543722 pod_ready.go:102] pod "metrics-server-7c66d45ddc-5s7nh" in "kube-system" namespace has status "Ready":"False"
	I1009 22:56:54.056759 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:54.165721 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:54.183524 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:54.253452 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:54.546101 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:54.678629 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:54.723711 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:54.753925 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:55.044835 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:55.179700 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:55.217185 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:55.298051 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:55.306164 1543722 pod_ready.go:92] pod "metrics-server-7c66d45ddc-5s7nh" in "kube-system" namespace has status "Ready":"True"
	I1009 22:56:55.306227 1543722 pod_ready.go:81] duration metric: took 12.328020085s waiting for pod "metrics-server-7c66d45ddc-5s7nh" in "kube-system" namespace to be "Ready" ...
	I1009 22:56:55.306255 1543722 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-q2tdr" in "kube-system" namespace to be "Ready" ...
	I1009 22:56:55.543776 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:55.665408 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:55.684489 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:55.754820 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:56.043881 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:56.169320 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:56.182911 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:56.250627 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:56.543340 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:56.665472 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:56.682710 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:56.751676 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:57.046158 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:57.166753 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:57.185277 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:57.261938 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:57.372289 1543722 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-q2tdr" in "kube-system" namespace has status "Ready":"False"
	I1009 22:56:57.544164 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:57.666444 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:57.683772 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:57.753714 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:58.045880 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:58.189655 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:58.215887 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:58.253880 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:58.543724 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:58.667594 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:58.683022 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:58.751098 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:59.043723 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:59.165465 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:59.183290 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:59.250921 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:59.542764 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:56:59.665970 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:56:59.683007 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:56:59.755328 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:56:59.870880 1543722 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-q2tdr" in "kube-system" namespace has status "Ready":"False"
	I1009 22:57:00.121657 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:57:00.176071 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:00.192794 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:57:00.258098 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:57:00.544045 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:57:00.665991 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:00.684238 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:57:00.750955 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:57:01.044015 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:57:01.165421 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:01.182319 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:57:01.251698 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:57:01.543447 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:57:01.666696 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:01.684532 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:57:01.751484 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:57:02.045403 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:57:02.167242 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:02.186275 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:57:02.251536 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:57:02.361437 1543722 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-q2tdr" in "kube-system" namespace has status "Ready":"False"
	I1009 22:57:02.550087 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:57:02.665730 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:02.682254 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:57:02.751270 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:57:03.043583 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:57:03.166495 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:03.186336 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:57:03.250984 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:57:03.543762 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:57:03.666572 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:03.682858 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:57:03.750544 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:57:04.047009 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:57:04.172929 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:04.183181 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:57:04.251085 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:57:04.543875 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:57:04.666914 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:04.684395 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:57:04.756524 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:57:04.864388 1543722 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-q2tdr" in "kube-system" namespace has status "Ready":"False"
	I1009 22:57:05.048549 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:57:05.169961 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:05.202747 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:57:05.251639 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:57:05.543766 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:57:05.665299 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:05.683289 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:57:05.751372 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:57:06.047874 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:57:06.177477 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:06.190689 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:57:06.251534 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:57:06.543600 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:57:06.667447 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:06.682978 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:57:06.750839 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:57:07.044008 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:57:07.166317 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:07.193022 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:57:07.251668 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:57:07.362234 1543722 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-q2tdr" in "kube-system" namespace has status "Ready":"False"
	I1009 22:57:07.544919 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:57:07.665462 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:07.683645 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:57:07.751238 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:57:08.044516 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:57:08.167584 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:08.183981 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:57:08.251772 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:57:08.551485 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:57:08.665411 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:08.682438 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:57:08.752644 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:57:09.044201 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:57:09.167147 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:09.183109 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:57:09.251102 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:57:09.544420 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:57:09.668809 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:09.683587 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:57:09.754408 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:57:09.863722 1543722 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-q2tdr" in "kube-system" namespace has status "Ready":"False"
	I1009 22:57:10.045012 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:57:10.166482 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:10.187172 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:57:10.251379 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:57:10.543616 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:57:10.666140 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:10.690605 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:57:10.752080 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:57:11.043944 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:57:11.169785 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:11.190491 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:57:11.251453 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:57:11.360389 1543722 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-q2tdr" in "kube-system" namespace has status "Ready":"True"
	I1009 22:57:11.360416 1543722 pod_ready.go:81] duration metric: took 16.054139459s waiting for pod "nvidia-device-plugin-daemonset-q2tdr" in "kube-system" namespace to be "Ready" ...
	I1009 22:57:11.360440 1543722 pod_ready.go:38] duration metric: took 31.145101074s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 22:57:11.360456 1543722 api_server.go:52] waiting for apiserver process to appear ...
	I1009 22:57:11.360483 1543722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 22:57:11.360548 1543722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 22:57:11.412364 1543722 cri.go:89] found id: "c8a36523b66adffac6c7460d915eb992061251468a54b5a78ec92e4910d08e89"
	I1009 22:57:11.412386 1543722 cri.go:89] found id: ""
	I1009 22:57:11.412394 1543722 logs.go:284] 1 containers: [c8a36523b66adffac6c7460d915eb992061251468a54b5a78ec92e4910d08e89]
	I1009 22:57:11.412452 1543722 ssh_runner.go:195] Run: which crictl
	I1009 22:57:11.417219 1543722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 22:57:11.417313 1543722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 22:57:11.461210 1543722 cri.go:89] found id: "92d19601d78ae84b6f32b973f79fb0ed3e89cb25c662b820e6d8ef81eb705e8b"
	I1009 22:57:11.461236 1543722 cri.go:89] found id: ""
	I1009 22:57:11.461244 1543722 logs.go:284] 1 containers: [92d19601d78ae84b6f32b973f79fb0ed3e89cb25c662b820e6d8ef81eb705e8b]
	I1009 22:57:11.461302 1543722 ssh_runner.go:195] Run: which crictl
	I1009 22:57:11.466180 1543722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 22:57:11.466284 1543722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 22:57:11.512994 1543722 cri.go:89] found id: "4cc0f03b7b9d3ab899712d155cc00dab99e96faa4ac74af618e04263ad6f59fc"
	I1009 22:57:11.513016 1543722 cri.go:89] found id: ""
	I1009 22:57:11.513024 1543722 logs.go:284] 1 containers: [4cc0f03b7b9d3ab899712d155cc00dab99e96faa4ac74af618e04263ad6f59fc]
	I1009 22:57:11.513088 1543722 ssh_runner.go:195] Run: which crictl
	I1009 22:57:11.517657 1543722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 22:57:11.517806 1543722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 22:57:11.543438 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:57:11.572950 1543722 cri.go:89] found id: "f2398f5cf10ec11d73b64dc3719508a681dc6d5ba320b0fb0743d21d07f66c1e"
	I1009 22:57:11.572970 1543722 cri.go:89] found id: ""
	I1009 22:57:11.572978 1543722 logs.go:284] 1 containers: [f2398f5cf10ec11d73b64dc3719508a681dc6d5ba320b0fb0743d21d07f66c1e]
	I1009 22:57:11.573071 1543722 ssh_runner.go:195] Run: which crictl
	I1009 22:57:11.577855 1543722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 22:57:11.577930 1543722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 22:57:11.622694 1543722 cri.go:89] found id: "e551c71a85799a0e6ef4d54cc1525f1d3581d9369604475fb1dbf1179a52e6ad"
	I1009 22:57:11.622716 1543722 cri.go:89] found id: ""
	I1009 22:57:11.622724 1543722 logs.go:284] 1 containers: [e551c71a85799a0e6ef4d54cc1525f1d3581d9369604475fb1dbf1179a52e6ad]
	I1009 22:57:11.622782 1543722 ssh_runner.go:195] Run: which crictl
	I1009 22:57:11.627196 1543722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 22:57:11.627270 1543722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 22:57:11.666835 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:11.672616 1543722 cri.go:89] found id: "d5571a725bbcc14072f18bd0f5c7c01c22ca9c60b0cce974a49ec776a9a6bc56"
	I1009 22:57:11.672639 1543722 cri.go:89] found id: ""
	I1009 22:57:11.672648 1543722 logs.go:284] 1 containers: [d5571a725bbcc14072f18bd0f5c7c01c22ca9c60b0cce974a49ec776a9a6bc56]
	I1009 22:57:11.672704 1543722 ssh_runner.go:195] Run: which crictl
	I1009 22:57:11.677328 1543722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 22:57:11.677429 1543722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 22:57:11.683098 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:57:11.726146 1543722 cri.go:89] found id: "f2711ea8bd871058cfa1700a7f7ed89f457ee1cd3cdc0665a62b594b4cae144e"
	I1009 22:57:11.726170 1543722 cri.go:89] found id: ""
	I1009 22:57:11.726179 1543722 logs.go:284] 1 containers: [f2711ea8bd871058cfa1700a7f7ed89f457ee1cd3cdc0665a62b594b4cae144e]
	I1009 22:57:11.726236 1543722 ssh_runner.go:195] Run: which crictl
	I1009 22:57:11.730944 1543722 logs.go:123] Gathering logs for etcd [92d19601d78ae84b6f32b973f79fb0ed3e89cb25c662b820e6d8ef81eb705e8b] ...
	I1009 22:57:11.731017 1543722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92d19601d78ae84b6f32b973f79fb0ed3e89cb25c662b820e6d8ef81eb705e8b"
	I1009 22:57:11.750921 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:57:11.789279 1543722 logs.go:123] Gathering logs for kube-proxy [e551c71a85799a0e6ef4d54cc1525f1d3581d9369604475fb1dbf1179a52e6ad] ...
	I1009 22:57:11.789309 1543722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e551c71a85799a0e6ef4d54cc1525f1d3581d9369604475fb1dbf1179a52e6ad"
	I1009 22:57:11.837062 1543722 logs.go:123] Gathering logs for kube-controller-manager [d5571a725bbcc14072f18bd0f5c7c01c22ca9c60b0cce974a49ec776a9a6bc56] ...
	I1009 22:57:11.837091 1543722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5571a725bbcc14072f18bd0f5c7c01c22ca9c60b0cce974a49ec776a9a6bc56"
	I1009 22:57:11.916134 1543722 logs.go:123] Gathering logs for kube-scheduler [f2398f5cf10ec11d73b64dc3719508a681dc6d5ba320b0fb0743d21d07f66c1e] ...
	I1009 22:57:11.916169 1543722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2398f5cf10ec11d73b64dc3719508a681dc6d5ba320b0fb0743d21d07f66c1e"
	I1009 22:57:11.970527 1543722 logs.go:123] Gathering logs for kindnet [f2711ea8bd871058cfa1700a7f7ed89f457ee1cd3cdc0665a62b594b4cae144e] ...
	I1009 22:57:11.970560 1543722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2711ea8bd871058cfa1700a7f7ed89f457ee1cd3cdc0665a62b594b4cae144e"
	I1009 22:57:12.026001 1543722 logs.go:123] Gathering logs for CRI-O ...
	I1009 22:57:12.026032 1543722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 22:57:12.054246 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:57:12.129774 1543722 logs.go:123] Gathering logs for kubelet ...
	I1009 22:57:12.129817 1543722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 22:57:12.171278 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:12.205227 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1009 22:57:12.233543 1543722 logs.go:138] Found kubelet problem: Oct 09 22:56:14 addons-749116 kubelet[1362]: W1009 22:56:14.149659    1362 reflector.go:535] object-"gadget"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-749116" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-749116' and this object
	W1009 22:57:12.233815 1543722 logs.go:138] Found kubelet problem: Oct 09 22:56:14 addons-749116 kubelet[1362]: E1009 22:56:14.149708    1362 reflector.go:147] object-"gadget"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-749116" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-749116' and this object
	W1009 22:57:12.238368 1543722 logs.go:138] Found kubelet problem: Oct 09 22:56:39 addons-749116 kubelet[1362]: W1009 22:56:39.749075    1362 reflector.go:535] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-749116" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-749116' and this object
	W1009 22:57:12.238628 1543722 logs.go:138] Found kubelet problem: Oct 09 22:56:39 addons-749116 kubelet[1362]: E1009 22:56:39.749119    1362 reflector.go:147] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-749116" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-749116' and this object
	W1009 22:57:12.238869 1543722 logs.go:138] Found kubelet problem: Oct 09 22:56:39 addons-749116 kubelet[1362]: W1009 22:56:39.749287    1362 reflector.go:535] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-749116" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-749116' and this object
	W1009 22:57:12.239108 1543722 logs.go:138] Found kubelet problem: Oct 09 22:56:39 addons-749116 kubelet[1362]: E1009 22:56:39.749314    1362 reflector.go:147] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-749116" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-749116' and this object
	W1009 22:57:12.239355 1543722 logs.go:138] Found kubelet problem: Oct 09 22:56:39 addons-749116 kubelet[1362]: W1009 22:56:39.749442    1362 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-749116" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-749116' and this object
	W1009 22:57:12.239574 1543722 logs.go:138] Found kubelet problem: Oct 09 22:56:39 addons-749116 kubelet[1362]: E1009 22:56:39.749467    1362 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-749116" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-749116' and this object
	W1009 22:57:12.241010 1543722 logs.go:138] Found kubelet problem: Oct 09 22:56:39 addons-749116 kubelet[1362]: W1009 22:56:39.782523    1362 reflector.go:535] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-749116" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-749116' and this object
	W1009 22:57:12.241267 1543722 logs.go:138] Found kubelet problem: Oct 09 22:56:39 addons-749116 kubelet[1362]: E1009 22:56:39.782566    1362 reflector.go:147] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-749116" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-749116' and this object
	W1009 22:57:12.241469 1543722 logs.go:138] Found kubelet problem: Oct 09 22:56:39 addons-749116 kubelet[1362]: W1009 22:56:39.782523    1362 reflector.go:535] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-749116" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-749116' and this object
	W1009 22:57:12.241723 1543722 logs.go:138] Found kubelet problem: Oct 09 22:56:39 addons-749116 kubelet[1362]: E1009 22:56:39.782592    1362 reflector.go:147] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-749116" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-749116' and this object
	W1009 22:57:12.241935 1543722 logs.go:138] Found kubelet problem: Oct 09 22:56:39 addons-749116 kubelet[1362]: W1009 22:56:39.782595    1362 reflector.go:535] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-749116" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-749116' and this object
	W1009 22:57:12.242135 1543722 logs.go:138] Found kubelet problem: Oct 09 22:56:39 addons-749116 kubelet[1362]: E1009 22:56:39.782607    1362 reflector.go:147] object-"kube-system"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-749116" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-749116' and this object
	W1009 22:57:12.249216 1543722 logs.go:138] Found kubelet problem: Oct 09 22:56:39 addons-749116 kubelet[1362]: W1009 22:56:39.791007    1362 reflector.go:535] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-749116" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-749116' and this object
	W1009 22:57:12.249457 1543722 logs.go:138] Found kubelet problem: Oct 09 22:56:39 addons-749116 kubelet[1362]: E1009 22:56:39.791048    1362 reflector.go:147] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-749116" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-749116' and this object
	W1009 22:57:12.249656 1543722 logs.go:138] Found kubelet problem: Oct 09 22:56:39 addons-749116 kubelet[1362]: W1009 22:56:39.791991    1362 reflector.go:535] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-749116" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-749116' and this object
	W1009 22:57:12.249883 1543722 logs.go:138] Found kubelet problem: Oct 09 22:56:39 addons-749116 kubelet[1362]: E1009 22:56:39.792022    1362 reflector.go:147] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-749116" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-749116' and this object
	I1009 22:57:12.251922 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:57:12.280262 1543722 logs.go:123] Gathering logs for dmesg ...
	I1009 22:57:12.280312 1543722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 22:57:12.308646 1543722 logs.go:123] Gathering logs for describe nodes ...
	I1009 22:57:12.308681 1543722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 22:57:12.551744 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:57:12.591687 1543722 logs.go:123] Gathering logs for kube-apiserver [c8a36523b66adffac6c7460d915eb992061251468a54b5a78ec92e4910d08e89] ...
	I1009 22:57:12.591727 1543722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8a36523b66adffac6c7460d915eb992061251468a54b5a78ec92e4910d08e89"
	I1009 22:57:12.667190 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:12.683542 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:57:12.751894 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:57:12.801461 1543722 logs.go:123] Gathering logs for coredns [4cc0f03b7b9d3ab899712d155cc00dab99e96faa4ac74af618e04263ad6f59fc] ...
	I1009 22:57:12.801543 1543722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4cc0f03b7b9d3ab899712d155cc00dab99e96faa4ac74af618e04263ad6f59fc"
	I1009 22:57:13.002721 1543722 logs.go:123] Gathering logs for container status ...
	I1009 22:57:13.002767 1543722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 22:57:13.043650 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:57:13.166194 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:13.183424 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:57:13.253414 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:57:13.258664 1543722 out.go:309] Setting ErrFile to fd 2...
	I1009 22:57:13.258728 1543722 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1009 22:57:13.258814 1543722 out.go:239] X Problems detected in kubelet:
	W1009 22:57:13.258856 1543722 out.go:239]   Oct 09 22:56:39 addons-749116 kubelet[1362]: E1009 22:56:39.782607    1362 reflector.go:147] object-"kube-system"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-749116" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-749116' and this object
	W1009 22:57:13.258891 1543722 out.go:239]   Oct 09 22:56:39 addons-749116 kubelet[1362]: W1009 22:56:39.791007    1362 reflector.go:535] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-749116" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-749116' and this object
	W1009 22:57:13.258942 1543722 out.go:239]   Oct 09 22:56:39 addons-749116 kubelet[1362]: E1009 22:56:39.791048    1362 reflector.go:147] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-749116" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-749116' and this object
	W1009 22:57:13.258975 1543722 out.go:239]   Oct 09 22:56:39 addons-749116 kubelet[1362]: W1009 22:56:39.791991    1362 reflector.go:535] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-749116" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-749116' and this object
	W1009 22:57:13.259017 1543722 out.go:239]   Oct 09 22:56:39 addons-749116 kubelet[1362]: E1009 22:56:39.792022    1362 reflector.go:147] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-749116" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-749116' and this object
	I1009 22:57:13.259050 1543722 out.go:309] Setting ErrFile to fd 2...
	I1009 22:57:13.259068 1543722 out.go:343] TERM=,COLORTERM=, which probably does not support color
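
The cycle above is minikube's periodic log gathering: the last 400 lines of kubelet and CRI-O come from journalctl, and each control-plane component is tailed through its CRI container ID. The same diagnostics can be reproduced by hand inside the node (a sketch using only the commands shown in the log; run them via `minikube -p addons-749116 ssh`, and note that container IDs differ per run):

    # Tail the node-level services exactly as the log gatherer does.
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    # Tail one component container, e.g. etcd, by the ID found above.
    sudo /usr/bin/crictl logs --tail 400 92d19601d78ae84b6f32b973f79fb0ed3e89cb25c662b820e6d8ef81eb705e8b
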
	I1009 22:57:13.544125 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:57:13.666275 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:13.685167 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:57:13.751995 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:57:14.045332 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:57:14.169099 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:14.186109 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:57:14.252572 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:57:14.548138 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:57:14.666423 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:14.684984 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:57:14.751483 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:57:15.059843 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:57:15.197944 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:15.211234 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:57:15.253925 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:57:15.547924 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:57:15.666932 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:15.683071 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:57:15.750953 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:57:16.045413 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:57:16.165454 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:16.184303 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 22:57:16.252070 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:57:16.542813 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:57:16.667221 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:16.690700 1543722 kapi.go:107] duration metric: took 1m2.084713716s to wait for kubernetes.io/minikube-addons=registry ...
	I1009 22:57:16.752923 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:57:17.043615 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:57:17.165627 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:17.253659 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:57:17.613836 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:57:17.679356 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:17.762323 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:57:18.045298 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:57:18.178458 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:18.251272 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:57:18.543641 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:57:18.664667 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:18.751373 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:57:19.044345 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:57:19.165759 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:19.251229 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:57:19.544542 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:57:19.665900 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:19.750735 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:57:20.043819 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:57:20.166714 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:20.251962 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:57:20.545405 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:57:20.670226 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:20.752166 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:57:21.056331 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:57:21.167427 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:21.251663 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:57:21.561711 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:57:21.665906 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:21.752089 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:57:22.052247 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:57:22.167191 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:22.250681 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:57:22.545745 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:57:22.666020 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:22.751395 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:57:23.043523 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:57:23.165308 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:23.250943 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:57:23.259701 1543722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 22:57:23.277836 1543722 api_server.go:72] duration metric: took 1m14.481385287s to wait for apiserver process to appear ...
	I1009 22:57:23.277863 1543722 api_server.go:88] waiting for apiserver healthz status ...
	I1009 22:57:23.277915 1543722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 22:57:23.278002 1543722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 22:57:23.351178 1543722 cri.go:89] found id: "c8a36523b66adffac6c7460d915eb992061251468a54b5a78ec92e4910d08e89"
	I1009 22:57:23.351203 1543722 cri.go:89] found id: ""
	I1009 22:57:23.351212 1543722 logs.go:284] 1 containers: [c8a36523b66adffac6c7460d915eb992061251468a54b5a78ec92e4910d08e89]
	I1009 22:57:23.351271 1543722 ssh_runner.go:195] Run: which crictl
	I1009 22:57:23.356013 1543722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 22:57:23.356127 1543722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 22:57:23.418848 1543722 cri.go:89] found id: "92d19601d78ae84b6f32b973f79fb0ed3e89cb25c662b820e6d8ef81eb705e8b"
	I1009 22:57:23.418924 1543722 cri.go:89] found id: ""
	I1009 22:57:23.418945 1543722 logs.go:284] 1 containers: [92d19601d78ae84b6f32b973f79fb0ed3e89cb25c662b820e6d8ef81eb705e8b]
	I1009 22:57:23.419037 1543722 ssh_runner.go:195] Run: which crictl
	I1009 22:57:23.423983 1543722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 22:57:23.424058 1543722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 22:57:23.468032 1543722 cri.go:89] found id: "4cc0f03b7b9d3ab899712d155cc00dab99e96faa4ac74af618e04263ad6f59fc"
	I1009 22:57:23.468094 1543722 cri.go:89] found id: ""
	I1009 22:57:23.468119 1543722 logs.go:284] 1 containers: [4cc0f03b7b9d3ab899712d155cc00dab99e96faa4ac74af618e04263ad6f59fc]
	I1009 22:57:23.468206 1543722 ssh_runner.go:195] Run: which crictl
	I1009 22:57:23.472863 1543722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 22:57:23.472961 1543722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 22:57:23.527500 1543722 cri.go:89] found id: "f2398f5cf10ec11d73b64dc3719508a681dc6d5ba320b0fb0743d21d07f66c1e"
	I1009 22:57:23.527524 1543722 cri.go:89] found id: ""
	I1009 22:57:23.527532 1543722 logs.go:284] 1 containers: [f2398f5cf10ec11d73b64dc3719508a681dc6d5ba320b0fb0743d21d07f66c1e]
	I1009 22:57:23.527590 1543722 ssh_runner.go:195] Run: which crictl
	I1009 22:57:23.532503 1543722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 22:57:23.532616 1543722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 22:57:23.544025 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:57:23.582314 1543722 cri.go:89] found id: "e551c71a85799a0e6ef4d54cc1525f1d3581d9369604475fb1dbf1179a52e6ad"
	I1009 22:57:23.582355 1543722 cri.go:89] found id: ""
	I1009 22:57:23.582364 1543722 logs.go:284] 1 containers: [e551c71a85799a0e6ef4d54cc1525f1d3581d9369604475fb1dbf1179a52e6ad]
	I1009 22:57:23.582429 1543722 ssh_runner.go:195] Run: which crictl
	I1009 22:57:23.587157 1543722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 22:57:23.587236 1543722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 22:57:23.633469 1543722 cri.go:89] found id: "d5571a725bbcc14072f18bd0f5c7c01c22ca9c60b0cce974a49ec776a9a6bc56"
	I1009 22:57:23.633490 1543722 cri.go:89] found id: ""
	I1009 22:57:23.633499 1543722 logs.go:284] 1 containers: [d5571a725bbcc14072f18bd0f5c7c01c22ca9c60b0cce974a49ec776a9a6bc56]
	I1009 22:57:23.633556 1543722 ssh_runner.go:195] Run: which crictl
	I1009 22:57:23.638240 1543722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 22:57:23.638315 1543722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 22:57:23.668639 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:23.686544 1543722 cri.go:89] found id: "f2711ea8bd871058cfa1700a7f7ed89f457ee1cd3cdc0665a62b594b4cae144e"
	I1009 22:57:23.686564 1543722 cri.go:89] found id: ""
	I1009 22:57:23.686572 1543722 logs.go:284] 1 containers: [f2711ea8bd871058cfa1700a7f7ed89f457ee1cd3cdc0665a62b594b4cae144e]
	I1009 22:57:23.686630 1543722 ssh_runner.go:195] Run: which crictl
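
Before each gathering pass, the container ID for every component is rediscovered with `crictl ps`, as in the sequence above. A minimal sketch of that discovery loop (same flags as the log; the component names are taken from the passes above):

    for name in kube-apiserver etcd coredns kube-scheduler \
                kube-proxy kube-controller-manager kindnet; do
      # --quiet prints only container IDs; -a includes exited containers.
      sudo crictl ps -a --quiet --name="$name"
    done
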
	I1009 22:57:23.691601 1543722 logs.go:123] Gathering logs for kindnet [f2711ea8bd871058cfa1700a7f7ed89f457ee1cd3cdc0665a62b594b4cae144e] ...
	I1009 22:57:23.691626 1543722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2711ea8bd871058cfa1700a7f7ed89f457ee1cd3cdc0665a62b594b4cae144e"
	I1009 22:57:23.735199 1543722 logs.go:123] Gathering logs for kubelet ...
	I1009 22:57:23.735235 1543722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1009 22:57:23.751765 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1009 22:57:23.801847 1543722 logs.go:138] Found kubelet problem: Oct 09 22:56:14 addons-749116 kubelet[1362]: W1009 22:56:14.149659    1362 reflector.go:535] object-"gadget"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-749116" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-749116' and this object
	W1009 22:57:23.802099 1543722 logs.go:138] Found kubelet problem: Oct 09 22:56:14 addons-749116 kubelet[1362]: E1009 22:56:14.149708    1362 reflector.go:147] object-"gadget"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-749116" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-749116' and this object
	W1009 22:57:23.805849 1543722 logs.go:138] Found kubelet problem: Oct 09 22:56:39 addons-749116 kubelet[1362]: W1009 22:56:39.749075    1362 reflector.go:535] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-749116" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-749116' and this object
	W1009 22:57:23.806080 1543722 logs.go:138] Found kubelet problem: Oct 09 22:56:39 addons-749116 kubelet[1362]: E1009 22:56:39.749119    1362 reflector.go:147] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-749116" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-749116' and this object
	W1009 22:57:23.806288 1543722 logs.go:138] Found kubelet problem: Oct 09 22:56:39 addons-749116 kubelet[1362]: W1009 22:56:39.749287    1362 reflector.go:535] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-749116" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-749116' and this object
	W1009 22:57:23.806524 1543722 logs.go:138] Found kubelet problem: Oct 09 22:56:39 addons-749116 kubelet[1362]: E1009 22:56:39.749314    1362 reflector.go:147] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-749116" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-749116' and this object
	W1009 22:57:23.806717 1543722 logs.go:138] Found kubelet problem: Oct 09 22:56:39 addons-749116 kubelet[1362]: W1009 22:56:39.749442    1362 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-749116" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-749116' and this object
	W1009 22:57:23.806926 1543722 logs.go:138] Found kubelet problem: Oct 09 22:56:39 addons-749116 kubelet[1362]: E1009 22:56:39.749467    1362 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-749116" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-749116' and this object
	W1009 22:57:23.808196 1543722 logs.go:138] Found kubelet problem: Oct 09 22:56:39 addons-749116 kubelet[1362]: W1009 22:56:39.782523    1362 reflector.go:535] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-749116" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-749116' and this object
	W1009 22:57:23.808404 1543722 logs.go:138] Found kubelet problem: Oct 09 22:56:39 addons-749116 kubelet[1362]: E1009 22:56:39.782566    1362 reflector.go:147] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-749116" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-749116' and this object
	W1009 22:57:23.808602 1543722 logs.go:138] Found kubelet problem: Oct 09 22:56:39 addons-749116 kubelet[1362]: W1009 22:56:39.782523    1362 reflector.go:535] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-749116" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-749116' and this object
	W1009 22:57:23.808826 1543722 logs.go:138] Found kubelet problem: Oct 09 22:56:39 addons-749116 kubelet[1362]: E1009 22:56:39.782592    1362 reflector.go:147] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-749116" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-749116' and this object
	W1009 22:57:23.809012 1543722 logs.go:138] Found kubelet problem: Oct 09 22:56:39 addons-749116 kubelet[1362]: W1009 22:56:39.782595    1362 reflector.go:535] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-749116" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-749116' and this object
	W1009 22:57:23.809221 1543722 logs.go:138] Found kubelet problem: Oct 09 22:56:39 addons-749116 kubelet[1362]: E1009 22:56:39.782607    1362 reflector.go:147] object-"kube-system"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-749116" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-749116' and this object
	W1009 22:57:23.814784 1543722 logs.go:138] Found kubelet problem: Oct 09 22:56:39 addons-749116 kubelet[1362]: W1009 22:56:39.791007    1362 reflector.go:535] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-749116" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-749116' and this object
	W1009 22:57:23.814993 1543722 logs.go:138] Found kubelet problem: Oct 09 22:56:39 addons-749116 kubelet[1362]: E1009 22:56:39.791048    1362 reflector.go:147] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-749116" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-749116' and this object
	W1009 22:57:23.815240 1543722 logs.go:138] Found kubelet problem: Oct 09 22:56:39 addons-749116 kubelet[1362]: W1009 22:56:39.791991    1362 reflector.go:535] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-749116" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-749116' and this object
	W1009 22:57:23.815470 1543722 logs.go:138] Found kubelet problem: Oct 09 22:56:39 addons-749116 kubelet[1362]: E1009 22:56:39.792022    1362 reflector.go:147] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-749116" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-749116' and this object
	I1009 22:57:23.844878 1543722 logs.go:123] Gathering logs for dmesg ...
	I1009 22:57:23.844907 1543722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 22:57:23.870954 1543722 logs.go:123] Gathering logs for etcd [92d19601d78ae84b6f32b973f79fb0ed3e89cb25c662b820e6d8ef81eb705e8b] ...
	I1009 22:57:23.870986 1543722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92d19601d78ae84b6f32b973f79fb0ed3e89cb25c662b820e6d8ef81eb705e8b"
	I1009 22:57:23.943615 1543722 logs.go:123] Gathering logs for kube-scheduler [f2398f5cf10ec11d73b64dc3719508a681dc6d5ba320b0fb0743d21d07f66c1e] ...
	I1009 22:57:23.943651 1543722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2398f5cf10ec11d73b64dc3719508a681dc6d5ba320b0fb0743d21d07f66c1e"
	I1009 22:57:23.998991 1543722 logs.go:123] Gathering logs for kube-proxy [e551c71a85799a0e6ef4d54cc1525f1d3581d9369604475fb1dbf1179a52e6ad] ...
	I1009 22:57:23.999027 1543722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e551c71a85799a0e6ef4d54cc1525f1d3581d9369604475fb1dbf1179a52e6ad"
	I1009 22:57:24.049532 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:57:24.057532 1543722 logs.go:123] Gathering logs for kube-controller-manager [d5571a725bbcc14072f18bd0f5c7c01c22ca9c60b0cce974a49ec776a9a6bc56] ...
	I1009 22:57:24.057569 1543722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5571a725bbcc14072f18bd0f5c7c01c22ca9c60b0cce974a49ec776a9a6bc56"
	I1009 22:57:24.162394 1543722 logs.go:123] Gathering logs for CRI-O ...
	I1009 22:57:24.162426 1543722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 22:57:24.167849 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:24.251207 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:57:24.258783 1543722 logs.go:123] Gathering logs for container status ...
	I1009 22:57:24.258817 1543722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 22:57:24.331112 1543722 logs.go:123] Gathering logs for describe nodes ...
	I1009 22:57:24.331167 1543722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 22:57:24.521083 1543722 logs.go:123] Gathering logs for kube-apiserver [c8a36523b66adffac6c7460d915eb992061251468a54b5a78ec92e4910d08e89] ...
	I1009 22:57:24.521113 1543722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8a36523b66adffac6c7460d915eb992061251468a54b5a78ec92e4910d08e89"
	I1009 22:57:24.547315 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:57:24.625733 1543722 logs.go:123] Gathering logs for coredns [4cc0f03b7b9d3ab899712d155cc00dab99e96faa4ac74af618e04263ad6f59fc] ...
	I1009 22:57:24.625766 1543722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4cc0f03b7b9d3ab899712d155cc00dab99e96faa4ac74af618e04263ad6f59fc"
	I1009 22:57:24.666785 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:24.697073 1543722 out.go:309] Setting ErrFile to fd 2...
	I1009 22:57:24.697205 1543722 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1009 22:57:24.697281 1543722 out.go:239] X Problems detected in kubelet:
	W1009 22:57:24.697323 1543722 out.go:239]   Oct 09 22:56:39 addons-749116 kubelet[1362]: E1009 22:56:39.782607    1362 reflector.go:147] object-"kube-system"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-749116" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-749116' and this object
	W1009 22:57:24.697357 1543722 out.go:239]   Oct 09 22:56:39 addons-749116 kubelet[1362]: W1009 22:56:39.791007    1362 reflector.go:535] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-749116" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-749116' and this object
	W1009 22:57:24.697389 1543722 out.go:239]   Oct 09 22:56:39 addons-749116 kubelet[1362]: E1009 22:56:39.791048    1362 reflector.go:147] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-749116" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-749116' and this object
	W1009 22:57:24.697421 1543722 out.go:239]   Oct 09 22:56:39 addons-749116 kubelet[1362]: W1009 22:56:39.791991    1362 reflector.go:535] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-749116" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-749116' and this object
	W1009 22:57:24.697453 1543722 out.go:239]   Oct 09 22:56:39 addons-749116 kubelet[1362]: E1009 22:56:39.792022    1362 reflector.go:147] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-749116" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-749116' and this object
	I1009 22:57:24.697483 1543722 out.go:309] Setting ErrFile to fd 2...
	I1009 22:57:24.697510 1543722 out.go:343] TERM=,COLORTERM=, which probably does not support color
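
The "Problems detected in kubelet" entries above are reflector list/watch failures rejected by the apiserver's node authorizer: a kubelet may only read secrets and configmaps referenced by pods already bound to its node, so lookups attempted before that binding exists fail with "no relationship found between node ... and this object", and they typically stop once the pods are scheduled. To pull just these warnings out of the node, a sketch built from the journalctl command the gatherer already runs:

    sudo journalctl -u kubelet -n 400 | grep -E 'reflector\.go.*(forbidden|no relationship found)'
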
	I1009 22:57:24.750915 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:57:25.057605 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:57:25.167230 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:25.251093 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:57:25.544171 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:57:25.666414 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:25.751195 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:57:26.044983 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:57:26.166132 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:26.251092 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 22:57:26.543706 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:57:26.665440 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:26.752912 1543722 kapi.go:107] duration metric: took 1m9.102615022s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1009 22:57:26.756979 1543722 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-749116 cluster.
	I1009 22:57:26.759508 1543722 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1009 22:57:26.761226 1543722 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
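
The `gcp-auth-skip-secret` opt-out mentioned above is applied as a pod label. A hypothetical example (the pod name and image are illustrative, and the label value "true" is an assumption; only the label key comes from the message above):

    kubectl --context addons-749116 apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: skip-gcp-auth-demo
      labels:
        gcp-auth-skip-secret: "true"  # assumed value; key taken from the addon message
    spec:
      containers:
      - name: busybox
        image: busybox
        command: ["sleep", "3600"]
    EOF
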
	I1009 22:57:27.043591 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:57:27.167705 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:27.543917 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:57:27.665373 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:28.043423 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:57:28.164823 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:28.543341 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:57:28.665545 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:29.043271 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:57:29.166487 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:29.544469 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:57:29.666023 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:30.072914 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:57:30.166057 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:30.556106 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:57:30.665685 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:31.048569 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:57:31.165430 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:31.544064 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:57:31.666733 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:32.044226 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:57:32.166153 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:32.547733 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:57:32.666517 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:33.047716 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:57:33.166093 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:33.544758 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:57:33.665309 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:34.043818 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:57:34.165281 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:34.544389 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:57:34.665448 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:34.698729 1543722 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1009 22:57:34.709926 1543722 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1009 22:57:34.711641 1543722 api_server.go:141] control plane version: v1.28.2
	I1009 22:57:34.711710 1543722 api_server.go:131] duration metric: took 11.433839847s to wait for apiserver health ...
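
The healthz wait above amounts to probing the apiserver endpoint until it answers 200 with the body "ok". Reproduced with curl (a sketch; -k skips verification of the apiserver's self-signed certificate):

    curl -k https://192.168.49.2:8443/healthz
    # prints: ok
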
	I1009 22:57:34.711733 1543722 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 22:57:34.711781 1543722 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 22:57:34.711873 1543722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 22:57:34.784878 1543722 cri.go:89] found id: "c8a36523b66adffac6c7460d915eb992061251468a54b5a78ec92e4910d08e89"
	I1009 22:57:34.784952 1543722 cri.go:89] found id: ""
	I1009 22:57:34.784982 1543722 logs.go:284] 1 containers: [c8a36523b66adffac6c7460d915eb992061251468a54b5a78ec92e4910d08e89]
	I1009 22:57:34.785072 1543722 ssh_runner.go:195] Run: which crictl
	I1009 22:57:34.791795 1543722 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 22:57:34.791916 1543722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 22:57:34.873174 1543722 cri.go:89] found id: "92d19601d78ae84b6f32b973f79fb0ed3e89cb25c662b820e6d8ef81eb705e8b"
	I1009 22:57:34.873246 1543722 cri.go:89] found id: ""
	I1009 22:57:34.873266 1543722 logs.go:284] 1 containers: [92d19601d78ae84b6f32b973f79fb0ed3e89cb25c662b820e6d8ef81eb705e8b]
	I1009 22:57:34.873355 1543722 ssh_runner.go:195] Run: which crictl
	I1009 22:57:34.879268 1543722 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 22:57:34.879398 1543722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 22:57:34.958009 1543722 cri.go:89] found id: "4cc0f03b7b9d3ab899712d155cc00dab99e96faa4ac74af618e04263ad6f59fc"
	I1009 22:57:34.958068 1543722 cri.go:89] found id: ""
	I1009 22:57:34.958098 1543722 logs.go:284] 1 containers: [4cc0f03b7b9d3ab899712d155cc00dab99e96faa4ac74af618e04263ad6f59fc]
	I1009 22:57:34.958187 1543722 ssh_runner.go:195] Run: which crictl
	I1009 22:57:34.964610 1543722 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 22:57:34.964767 1543722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 22:57:35.047903 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:57:35.066829 1543722 cri.go:89] found id: "f2398f5cf10ec11d73b64dc3719508a681dc6d5ba320b0fb0743d21d07f66c1e"
	I1009 22:57:35.066902 1543722 cri.go:89] found id: ""
	I1009 22:57:35.066937 1543722 logs.go:284] 1 containers: [f2398f5cf10ec11d73b64dc3719508a681dc6d5ba320b0fb0743d21d07f66c1e]
	I1009 22:57:35.067052 1543722 ssh_runner.go:195] Run: which crictl
	I1009 22:57:35.077273 1543722 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 22:57:35.077403 1543722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 22:57:35.146599 1543722 cri.go:89] found id: "e551c71a85799a0e6ef4d54cc1525f1d3581d9369604475fb1dbf1179a52e6ad"
	I1009 22:57:35.146673 1543722 cri.go:89] found id: ""
	I1009 22:57:35.146707 1543722 logs.go:284] 1 containers: [e551c71a85799a0e6ef4d54cc1525f1d3581d9369604475fb1dbf1179a52e6ad]
	I1009 22:57:35.146792 1543722 ssh_runner.go:195] Run: which crictl
	I1009 22:57:35.162967 1543722 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 22:57:35.163098 1543722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 22:57:35.166987 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:35.236015 1543722 cri.go:89] found id: "d5571a725bbcc14072f18bd0f5c7c01c22ca9c60b0cce974a49ec776a9a6bc56"
	I1009 22:57:35.236084 1543722 cri.go:89] found id: ""
	I1009 22:57:35.236106 1543722 logs.go:284] 1 containers: [d5571a725bbcc14072f18bd0f5c7c01c22ca9c60b0cce974a49ec776a9a6bc56]
	I1009 22:57:35.236201 1543722 ssh_runner.go:195] Run: which crictl
	I1009 22:57:35.241363 1543722 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 22:57:35.241484 1543722 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 22:57:35.307889 1543722 cri.go:89] found id: "f2711ea8bd871058cfa1700a7f7ed89f457ee1cd3cdc0665a62b594b4cae144e"
	I1009 22:57:35.307959 1543722 cri.go:89] found id: ""
	I1009 22:57:35.307981 1543722 logs.go:284] 1 containers: [f2711ea8bd871058cfa1700a7f7ed89f457ee1cd3cdc0665a62b594b4cae144e]
	I1009 22:57:35.308083 1543722 ssh_runner.go:195] Run: which crictl
	I1009 22:57:35.320367 1543722 logs.go:123] Gathering logs for kube-proxy [e551c71a85799a0e6ef4d54cc1525f1d3581d9369604475fb1dbf1179a52e6ad] ...
	I1009 22:57:35.320449 1543722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e551c71a85799a0e6ef4d54cc1525f1d3581d9369604475fb1dbf1179a52e6ad"
	I1009 22:57:35.373441 1543722 logs.go:123] Gathering logs for CRI-O ...
	I1009 22:57:35.373470 1543722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 22:57:35.481198 1543722 logs.go:123] Gathering logs for container status ...
	I1009 22:57:35.481282 1543722 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 22:57:35.551853 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:57:35.582025 1543722 logs.go:123] Gathering logs for kubelet ...
	I1009 22:57:35.582101 1543722 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1009 22:57:35.661782 1543722 logs.go:138] Found kubelet problem: Oct 09 22:56:14 addons-749116 kubelet[1362]: W1009 22:56:14.149659    1362 reflector.go:535] object-"gadget"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-749116" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-749116' and this object
	W1009 22:57:35.662081 1543722 logs.go:138] Found kubelet problem: Oct 09 22:56:14 addons-749116 kubelet[1362]: E1009 22:56:14.149708    1362 reflector.go:147] object-"gadget"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-749116" cannot list resource "configmaps" in API group "" in the namespace "gadget": no relationship found between node 'addons-749116' and this object
	I1009 22:57:35.666227 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1009 22:57:35.667379 1543722 logs.go:138] Found kubelet problem: Oct 09 22:56:39 addons-749116 kubelet[1362]: W1009 22:56:39.749075    1362 reflector.go:535] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-749116" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-749116' and this object
	W1009 22:57:35.667626 1543722 logs.go:138] Found kubelet problem: Oct 09 22:56:39 addons-749116 kubelet[1362]: E1009 22:56:39.749119    1362 reflector.go:147] object-"ingress-nginx"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-749116" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-749116' and this object
	W1009 22:57:35.667834 1543722 logs.go:138] Found kubelet problem: Oct 09 22:56:39 addons-749116 kubelet[1362]: W1009 22:56:39.749287    1362 reflector.go:535] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-749116" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-749116' and this object
	W1009 22:57:35.668059 1543722 logs.go:138] Found kubelet problem: Oct 09 22:56:39 addons-749116 kubelet[1362]: E1009 22:56:39.749314    1362 reflector.go:147] object-"ingress-nginx"/"ingress-nginx-admission": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-749116" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-749116' and this object
	W1009 22:57:35.668254 1543722 logs.go:138] Found kubelet problem: Oct 09 22:56:39 addons-749116 kubelet[1362]: W1009 22:56:39.749442    1362 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-749116" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-749116' and this object
	W1009 22:57:35.668464 1543722 logs.go:138] Found kubelet problem: Oct 09 22:56:39 addons-749116 kubelet[1362]: E1009 22:56:39.749467    1362 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-749116" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-749116' and this object
	W1009 22:57:35.669710 1543722 logs.go:138] Found kubelet problem: Oct 09 22:56:39 addons-749116 kubelet[1362]: W1009 22:56:39.782523    1362 reflector.go:535] object-"default"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-749116" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-749116' and this object
	W1009 22:57:35.669914 1543722 logs.go:138] Found kubelet problem: Oct 09 22:56:39 addons-749116 kubelet[1362]: E1009 22:56:39.782566    1362 reflector.go:147] object-"default"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-749116" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'addons-749116' and this object
	W1009 22:57:35.670111 1543722 logs.go:138] Found kubelet problem: Oct 09 22:56:39 addons-749116 kubelet[1362]: W1009 22:56:39.782523    1362 reflector.go:535] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-749116" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-749116' and this object
	W1009 22:57:35.670331 1543722 logs.go:138] Found kubelet problem: Oct 09 22:56:39 addons-749116 kubelet[1362]: E1009 22:56:39.782592    1362 reflector.go:147] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-749116" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'addons-749116' and this object
	W1009 22:57:35.670516 1543722 logs.go:138] Found kubelet problem: Oct 09 22:56:39 addons-749116 kubelet[1362]: W1009 22:56:39.782595    1362 reflector.go:535] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-749116" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-749116' and this object
	W1009 22:57:35.670720 1543722 logs.go:138] Found kubelet problem: Oct 09 22:56:39 addons-749116 kubelet[1362]: E1009 22:56:39.782607    1362 reflector.go:147] object-"kube-system"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-749116" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-749116' and this object
	W1009 22:57:35.676360 1543722 logs.go:138] Found kubelet problem: Oct 09 22:56:39 addons-749116 kubelet[1362]: W1009 22:56:39.791007    1362 reflector.go:535] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-749116" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-749116' and this object
	W1009 22:57:35.676646 1543722 logs.go:138] Found kubelet problem: Oct 09 22:56:39 addons-749116 kubelet[1362]: E1009 22:56:39.791048    1362 reflector.go:147] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-749116" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-749116' and this object
	W1009 22:57:35.676857 1543722 logs.go:138] Found kubelet problem: Oct 09 22:56:39 addons-749116 kubelet[1362]: W1009 22:56:39.791991    1362 reflector.go:535] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-749116" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-749116' and this object
	W1009 22:57:35.677085 1543722 logs.go:138] Found kubelet problem: Oct 09 22:56:39 addons-749116 kubelet[1362]: E1009 22:56:39.792022    1362 reflector.go:147] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-749116" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-749116' and this object
	I1009 22:57:35.709102 1543722 logs.go:123] Gathering logs for dmesg ...
	I1009 22:57:35.709186 1543722 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 22:57:35.732679 1543722 logs.go:123] Gathering logs for describe nodes ...
	I1009 22:57:35.732751 1543722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 22:57:35.884750 1543722 logs.go:123] Gathering logs for kube-apiserver [c8a36523b66adffac6c7460d915eb992061251468a54b5a78ec92e4910d08e89] ...
	I1009 22:57:35.884781 1543722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c8a36523b66adffac6c7460d915eb992061251468a54b5a78ec92e4910d08e89"
	I1009 22:57:35.945464 1543722 logs.go:123] Gathering logs for kindnet [f2711ea8bd871058cfa1700a7f7ed89f457ee1cd3cdc0665a62b594b4cae144e] ...
	I1009 22:57:35.945498 1543722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2711ea8bd871058cfa1700a7f7ed89f457ee1cd3cdc0665a62b594b4cae144e"
	I1009 22:57:36.033798 1543722 logs.go:123] Gathering logs for etcd [92d19601d78ae84b6f32b973f79fb0ed3e89cb25c662b820e6d8ef81eb705e8b] ...
	I1009 22:57:36.033841 1543722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92d19601d78ae84b6f32b973f79fb0ed3e89cb25c662b820e6d8ef81eb705e8b"
	I1009 22:57:36.047503 1543722 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 22:57:36.166723 1543722 logs.go:123] Gathering logs for coredns [4cc0f03b7b9d3ab899712d155cc00dab99e96faa4ac74af618e04263ad6f59fc] ...
	I1009 22:57:36.166804 1543722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4cc0f03b7b9d3ab899712d155cc00dab99e96faa4ac74af618e04263ad6f59fc"
	I1009 22:57:36.176143 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:36.234786 1543722 logs.go:123] Gathering logs for kube-scheduler [f2398f5cf10ec11d73b64dc3719508a681dc6d5ba320b0fb0743d21d07f66c1e] ...
	I1009 22:57:36.234826 1543722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2398f5cf10ec11d73b64dc3719508a681dc6d5ba320b0fb0743d21d07f66c1e"
	I1009 22:57:36.288443 1543722 logs.go:123] Gathering logs for kube-controller-manager [d5571a725bbcc14072f18bd0f5c7c01c22ca9c60b0cce974a49ec776a9a6bc56] ...
	I1009 22:57:36.288474 1543722 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5571a725bbcc14072f18bd0f5c7c01c22ca9c60b0cce974a49ec776a9a6bc56"
	I1009 22:57:36.361356 1543722 out.go:309] Setting ErrFile to fd 2...
	I1009 22:57:36.361389 1543722 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W1009 22:57:36.361446 1543722 out.go:239] X Problems detected in kubelet:
	W1009 22:57:36.361456 1543722 out.go:239]   Oct 09 22:56:39 addons-749116 kubelet[1362]: E1009 22:56:39.782607    1362 reflector.go:147] object-"kube-system"/"gcp-auth": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-749116" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-749116' and this object
	W1009 22:57:36.361464 1543722 out.go:239]   Oct 09 22:56:39 addons-749116 kubelet[1362]: W1009 22:56:39.791007    1362 reflector.go:535] object-"gcp-auth"/"gcp-auth-certs": failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-749116" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-749116' and this object
	W1009 22:57:36.361474 1543722 out.go:239]   Oct 09 22:56:39 addons-749116 kubelet[1362]: E1009 22:56:39.791048    1362 reflector.go:147] object-"gcp-auth"/"gcp-auth-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "gcp-auth-certs" is forbidden: User "system:node:addons-749116" cannot list resource "secrets" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-749116' and this object
	W1009 22:57:36.361481 1543722 out.go:239]   Oct 09 22:56:39 addons-749116 kubelet[1362]: W1009 22:56:39.791991    1362 reflector.go:535] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-749116" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-749116' and this object
	W1009 22:57:36.361488 1543722 out.go:239]   Oct 09 22:56:39 addons-749116 kubelet[1362]: E1009 22:56:39.792022    1362 reflector.go:147] object-"local-path-storage"/"local-path-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-749116" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-749116' and this object
	I1009 22:57:36.361501 1543722 out.go:309] Setting ErrFile to fd 2...
	I1009 22:57:36.361507 1543722 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1009 22:57:36.542748 1543722 kapi.go:107] duration metric: took 1m21.539443813s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1009 22:57:36.665130 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:37.165052 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:37.664887 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:38.164637 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:38.664848 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:39.165541 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:39.664797 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:40.165781 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:40.666037 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:41.165523 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:41.666148 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:42.166873 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:42.665448 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:43.165553 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:43.665527 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:44.165526 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:44.665360 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:45.171619 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:45.664875 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:46.173739 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:46.373821 1543722 system_pods.go:59] 18 kube-system pods found
	I1009 22:57:46.373857 1543722 system_pods.go:61] "coredns-5dd5756b68-pkb4x" [4a6c3267-8e66-4329-bbf6-d237997c227d] Running
	I1009 22:57:46.373865 1543722 system_pods.go:61] "csi-hostpath-attacher-0" [6f4b2a4a-e132-4821-b6e9-23f7b0f1c2d8] Running
	I1009 22:57:46.373870 1543722 system_pods.go:61] "csi-hostpath-resizer-0" [04d8b38a-4458-45a0-8c32-edf69cb4a3b7] Running
	I1009 22:57:46.373885 1543722 system_pods.go:61] "csi-hostpathplugin-2txjg" [7ca650bd-d8b8-43e8-969e-92a2f135e768] Running
	I1009 22:57:46.373890 1543722 system_pods.go:61] "etcd-addons-749116" [3e6e084f-4d2e-483b-b8f8-d8738703a8ea] Running
	I1009 22:57:46.373896 1543722 system_pods.go:61] "kindnet-vkmtc" [aacc7ea6-1499-45e8-b1d4-0aadc96715fb] Running
	I1009 22:57:46.373901 1543722 system_pods.go:61] "kube-apiserver-addons-749116" [5dcd813d-51fb-4de1-9bbc-eaa131d22c96] Running
	I1009 22:57:46.373906 1543722 system_pods.go:61] "kube-controller-manager-addons-749116" [e670e535-1c92-43f0-a1a6-add8b880a94d] Running
	I1009 22:57:46.373919 1543722 system_pods.go:61] "kube-ingress-dns-minikube" [16dc1d67-cd9b-41de-9204-d92d0ab447e8] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1009 22:57:46.373927 1543722 system_pods.go:61] "kube-proxy-qkshl" [f8281c18-69dc-4bb8-b876-75281a6b2944] Running
	I1009 22:57:46.373935 1543722 system_pods.go:61] "kube-scheduler-addons-749116" [b3048643-4496-40b2-9bca-595f7398c8af] Running
	I1009 22:57:46.373941 1543722 system_pods.go:61] "metrics-server-7c66d45ddc-5s7nh" [5f9b199b-4c0a-4b45-98d4-c13e2a5dc381] Running
	I1009 22:57:46.373946 1543722 system_pods.go:61] "nvidia-device-plugin-daemonset-q2tdr" [689841eb-bbcf-4415-9a4f-66a28c9b2621] Running
	I1009 22:57:46.373954 1543722 system_pods.go:61] "registry-fsrvj" [119f71e8-0a1b-4211-89c0-a57d00b658a4] Running
	I1009 22:57:46.373960 1543722 system_pods.go:61] "registry-proxy-2scnj" [48eb5a8b-2d5b-4709-9800-45a0a2ca64eb] Running
	I1009 22:57:46.373965 1543722 system_pods.go:61] "snapshot-controller-58dbcc7b99-dpks5" [59f5001a-3d42-4d69-a703-860b33097c7f] Running
	I1009 22:57:46.373973 1543722 system_pods.go:61] "snapshot-controller-58dbcc7b99-n227q" [930c85fd-132e-4e73-b80e-c34ccdd11381] Running
	I1009 22:57:46.373978 1543722 system_pods.go:61] "storage-provisioner" [b44d858f-5f28-44ce-b41e-75154f4af4ff] Running
	I1009 22:57:46.373984 1543722 system_pods.go:74] duration metric: took 11.662234954s to wait for pod list to return data ...
	I1009 22:57:46.373997 1543722 default_sa.go:34] waiting for default service account to be created ...
	I1009 22:57:46.376758 1543722 default_sa.go:45] found service account: "default"
	I1009 22:57:46.376789 1543722 default_sa.go:55] duration metric: took 2.786495ms for default service account to be created ...
	I1009 22:57:46.376799 1543722 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 22:57:46.386813 1543722 system_pods.go:86] 18 kube-system pods found
	I1009 22:57:46.386847 1543722 system_pods.go:89] "coredns-5dd5756b68-pkb4x" [4a6c3267-8e66-4329-bbf6-d237997c227d] Running
	I1009 22:57:46.386855 1543722 system_pods.go:89] "csi-hostpath-attacher-0" [6f4b2a4a-e132-4821-b6e9-23f7b0f1c2d8] Running
	I1009 22:57:46.386861 1543722 system_pods.go:89] "csi-hostpath-resizer-0" [04d8b38a-4458-45a0-8c32-edf69cb4a3b7] Running
	I1009 22:57:46.386867 1543722 system_pods.go:89] "csi-hostpathplugin-2txjg" [7ca650bd-d8b8-43e8-969e-92a2f135e768] Running
	I1009 22:57:46.386872 1543722 system_pods.go:89] "etcd-addons-749116" [3e6e084f-4d2e-483b-b8f8-d8738703a8ea] Running
	I1009 22:57:46.386878 1543722 system_pods.go:89] "kindnet-vkmtc" [aacc7ea6-1499-45e8-b1d4-0aadc96715fb] Running
	I1009 22:57:46.386883 1543722 system_pods.go:89] "kube-apiserver-addons-749116" [5dcd813d-51fb-4de1-9bbc-eaa131d22c96] Running
	I1009 22:57:46.386888 1543722 system_pods.go:89] "kube-controller-manager-addons-749116" [e670e535-1c92-43f0-a1a6-add8b880a94d] Running
	I1009 22:57:46.386897 1543722 system_pods.go:89] "kube-ingress-dns-minikube" [16dc1d67-cd9b-41de-9204-d92d0ab447e8] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1009 22:57:46.386904 1543722 system_pods.go:89] "kube-proxy-qkshl" [f8281c18-69dc-4bb8-b876-75281a6b2944] Running
	I1009 22:57:46.386910 1543722 system_pods.go:89] "kube-scheduler-addons-749116" [b3048643-4496-40b2-9bca-595f7398c8af] Running
	I1009 22:57:46.386915 1543722 system_pods.go:89] "metrics-server-7c66d45ddc-5s7nh" [5f9b199b-4c0a-4b45-98d4-c13e2a5dc381] Running
	I1009 22:57:46.386921 1543722 system_pods.go:89] "nvidia-device-plugin-daemonset-q2tdr" [689841eb-bbcf-4415-9a4f-66a28c9b2621] Running
	I1009 22:57:46.386926 1543722 system_pods.go:89] "registry-fsrvj" [119f71e8-0a1b-4211-89c0-a57d00b658a4] Running
	I1009 22:57:46.386930 1543722 system_pods.go:89] "registry-proxy-2scnj" [48eb5a8b-2d5b-4709-9800-45a0a2ca64eb] Running
	I1009 22:57:46.386935 1543722 system_pods.go:89] "snapshot-controller-58dbcc7b99-dpks5" [59f5001a-3d42-4d69-a703-860b33097c7f] Running
	I1009 22:57:46.386945 1543722 system_pods.go:89] "snapshot-controller-58dbcc7b99-n227q" [930c85fd-132e-4e73-b80e-c34ccdd11381] Running
	I1009 22:57:46.386950 1543722 system_pods.go:89] "storage-provisioner" [b44d858f-5f28-44ce-b41e-75154f4af4ff] Running
	I1009 22:57:46.386966 1543722 system_pods.go:126] duration metric: took 10.161041ms to wait for k8s-apps to be running ...
	I1009 22:57:46.386974 1543722 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 22:57:46.387048 1543722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 22:57:46.425678 1543722 system_svc.go:56] duration metric: took 38.6937ms WaitForService to wait for kubelet.
	I1009 22:57:46.425713 1543722 kubeadm.go:581] duration metric: took 1m37.629269457s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1009 22:57:46.425733 1543722 node_conditions.go:102] verifying NodePressure condition ...
	I1009 22:57:46.430070 1543722 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1009 22:57:46.430103 1543722 node_conditions.go:123] node cpu capacity is 2
	I1009 22:57:46.430124 1543722 node_conditions.go:105] duration metric: took 4.385581ms to run NodePressure ...
	I1009 22:57:46.430137 1543722 start.go:228] waiting for startup goroutines ...
	I1009 22:57:46.667761 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:47.165929 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:47.665914 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:48.166486 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:48.666320 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:49.165174 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:49.666602 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:50.168209 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:50.665085 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:51.167737 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:51.665798 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:52.166016 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:52.665006 1543722 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 22:57:53.165811 1543722 kapi.go:107] duration metric: took 1m38.557021506s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1009 22:57:53.172187 1543722 out.go:177] * Enabled addons: storage-provisioner, cloud-spanner, nvidia-device-plugin, storage-provisioner-rancher, ingress-dns, inspektor-gadget, metrics-server, default-storageclass, volumesnapshots, registry, gcp-auth, csi-hostpath-driver, ingress
	I1009 22:57:53.174629 1543722 addons.go:502] enable addons completed in 1m44.864374936s: enabled=[storage-provisioner cloud-spanner nvidia-device-plugin storage-provisioner-rancher ingress-dns inspektor-gadget metrics-server default-storageclass volumesnapshots registry gcp-auth csi-hostpath-driver ingress]
	I1009 22:57:53.174709 1543722 start.go:233] waiting for cluster config update ...
	I1009 22:57:53.174753 1543722 start.go:242] writing updated cluster config ...
	I1009 22:57:53.175089 1543722 ssh_runner.go:195] Run: rm -f paused
	I1009 22:57:53.314651 1543722 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I1009 22:57:53.317009 1543722 out.go:177] * Done! kubectl is now configured to use "addons-749116" cluster and "default" namespace by default
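
The kapi poller above re-checks the labeled pod roughly every 500ms until it leaves Pending; the duration metrics it prints (1m21s for csi-hostpath-driver, 1m38s for ingress-nginx) are just those loops summed. A one-shot equivalent from the host — a sketch, assuming the kubectl context this run created — would be:

	kubectl --context addons-749116 --namespace=ingress-nginx wait \
	  --for=condition=ready pod \
	  --selector=app.kubernetes.io/name=ingress-nginx --timeout=120s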
	
	* 
	* ==> CRI-O <==
	* Oct 09 23:01:53 addons-749116 crio[891]: time="2023-10-09 23:01:53.382076580Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-5c4c674fdc-57kj7 Namespace:ingress-nginx ID:7ec43ec6f59db3b2d9383a5e29016e48853a69770e6fbbe50b0f85923cbcf0fa UID:dcfaa2e9-569e-4408-8337-141ea2613dc7 NetNS:/var/run/netns/e361c73a-a2a3-4fe0-ba04-a7589e87b64b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 09 23:01:53 addons-749116 crio[891]: time="2023-10-09 23:01:53.382220465Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-5c4c674fdc-57kj7 from CNI network \"kindnet\" (type=ptp)"
	Oct 09 23:01:53 addons-749116 crio[891]: time="2023-10-09 23:01:53.412778337Z" level=info msg="Stopped pod sandbox: 7ec43ec6f59db3b2d9383a5e29016e48853a69770e6fbbe50b0f85923cbcf0fa" id=76fed4a0-2fb6-44b9-8517-9401953cbf8c name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 09 23:01:53 addons-749116 crio[891]: time="2023-10-09 23:01:53.436578437Z" level=info msg="Removing container: dd37fad49b1166d5f383f8bc8342476466181c3d074836e61f9f4076d6834573" id=dbffa6bd-b839-4eda-8f01-188036d4ff50 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 09 23:01:53 addons-749116 crio[891]: time="2023-10-09 23:01:53.468202971Z" level=info msg="Removed container dd37fad49b1166d5f383f8bc8342476466181c3d074836e61f9f4076d6834573: ingress-nginx/ingress-nginx-controller-5c4c674fdc-57kj7/controller" id=dbffa6bd-b839-4eda-8f01-188036d4ff50 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 09 23:01:55 addons-749116 crio[891]: time="2023-10-09 23:01:55.666341742Z" level=info msg="Removing container: f149c3a280ca7eda1945dc1aac4df981932babb1478dbc80a70c8f1bf489e87f" id=761b11a5-98b9-4e03-87aa-e19a4fd41e58 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 09 23:01:55 addons-749116 crio[891]: time="2023-10-09 23:01:55.704276475Z" level=info msg="Removed container f149c3a280ca7eda1945dc1aac4df981932babb1478dbc80a70c8f1bf489e87f: ingress-nginx/ingress-nginx-admission-patch-j7jnr/patch" id=761b11a5-98b9-4e03-87aa-e19a4fd41e58 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 09 23:01:55 addons-749116 crio[891]: time="2023-10-09 23:01:55.705863893Z" level=info msg="Removing container: b30e74788a705406069f502eb03db0d914478b5457f3958e9b2512191daf5beb" id=d17f5de5-7e44-420a-b2dc-de14a30d7bc7 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 09 23:01:55 addons-749116 crio[891]: time="2023-10-09 23:01:55.740358598Z" level=info msg="Removed container b30e74788a705406069f502eb03db0d914478b5457f3958e9b2512191daf5beb: ingress-nginx/ingress-nginx-admission-create-9h2wc/create" id=d17f5de5-7e44-420a-b2dc-de14a30d7bc7 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 09 23:01:55 addons-749116 crio[891]: time="2023-10-09 23:01:55.742056458Z" level=info msg="Stopping pod sandbox: a08665ab545448cc87f52147c54880c41042a5bfaf312208fc6a63aefa0dbf33" id=45b122e3-e995-4cca-8ff7-fda9693ece0f name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 09 23:01:55 addons-749116 crio[891]: time="2023-10-09 23:01:55.742097861Z" level=info msg="Stopped pod sandbox (already stopped): a08665ab545448cc87f52147c54880c41042a5bfaf312208fc6a63aefa0dbf33" id=45b122e3-e995-4cca-8ff7-fda9693ece0f name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 09 23:01:55 addons-749116 crio[891]: time="2023-10-09 23:01:55.742375556Z" level=info msg="Removing pod sandbox: a08665ab545448cc87f52147c54880c41042a5bfaf312208fc6a63aefa0dbf33" id=bd0d0e0f-fdde-47b7-ba7f-e91b03e9efad name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 09 23:01:55 addons-749116 crio[891]: time="2023-10-09 23:01:55.751186497Z" level=info msg="Removed pod sandbox: a08665ab545448cc87f52147c54880c41042a5bfaf312208fc6a63aefa0dbf33" id=bd0d0e0f-fdde-47b7-ba7f-e91b03e9efad name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 09 23:01:55 addons-749116 crio[891]: time="2023-10-09 23:01:55.751786884Z" level=info msg="Stopping pod sandbox: a8a20f55961645b21d4863841030566e7603cd3c737ff0a3d245d43627a5e9d1" id=8ed19d9f-6b6a-41a6-beb6-d33133ad89a8 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 09 23:01:55 addons-749116 crio[891]: time="2023-10-09 23:01:55.751825440Z" level=info msg="Stopped pod sandbox (already stopped): a8a20f55961645b21d4863841030566e7603cd3c737ff0a3d245d43627a5e9d1" id=8ed19d9f-6b6a-41a6-beb6-d33133ad89a8 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 09 23:01:55 addons-749116 crio[891]: time="2023-10-09 23:01:55.752126110Z" level=info msg="Removing pod sandbox: a8a20f55961645b21d4863841030566e7603cd3c737ff0a3d245d43627a5e9d1" id=2efc73c2-7376-41dd-99cc-f880fd0d1c85 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 09 23:01:55 addons-749116 crio[891]: time="2023-10-09 23:01:55.761225716Z" level=info msg="Removed pod sandbox: a8a20f55961645b21d4863841030566e7603cd3c737ff0a3d245d43627a5e9d1" id=2efc73c2-7376-41dd-99cc-f880fd0d1c85 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 09 23:01:55 addons-749116 crio[891]: time="2023-10-09 23:01:55.761700007Z" level=info msg="Stopping pod sandbox: 7ec43ec6f59db3b2d9383a5e29016e48853a69770e6fbbe50b0f85923cbcf0fa" id=36417ff0-dce1-437c-a506-fa5dc889ac18 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 09 23:01:55 addons-749116 crio[891]: time="2023-10-09 23:01:55.761742846Z" level=info msg="Stopped pod sandbox (already stopped): 7ec43ec6f59db3b2d9383a5e29016e48853a69770e6fbbe50b0f85923cbcf0fa" id=36417ff0-dce1-437c-a506-fa5dc889ac18 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 09 23:01:55 addons-749116 crio[891]: time="2023-10-09 23:01:55.762136332Z" level=info msg="Removing pod sandbox: 7ec43ec6f59db3b2d9383a5e29016e48853a69770e6fbbe50b0f85923cbcf0fa" id=ddcbe6dc-6743-4a51-a48a-246635758e47 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 09 23:01:55 addons-749116 crio[891]: time="2023-10-09 23:01:55.769964656Z" level=info msg="Removed pod sandbox: 7ec43ec6f59db3b2d9383a5e29016e48853a69770e6fbbe50b0f85923cbcf0fa" id=ddcbe6dc-6743-4a51-a48a-246635758e47 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 09 23:01:55 addons-749116 crio[891]: time="2023-10-09 23:01:55.770422217Z" level=info msg="Stopping pod sandbox: df15217b439e2b0a9318b712e2451184de3cc23a825c517ede1e3440ee9fadfe" id=e9c6d822-b8b7-4570-8d77-21d5623d2622 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 09 23:01:55 addons-749116 crio[891]: time="2023-10-09 23:01:55.770459706Z" level=info msg="Stopped pod sandbox (already stopped): df15217b439e2b0a9318b712e2451184de3cc23a825c517ede1e3440ee9fadfe" id=e9c6d822-b8b7-4570-8d77-21d5623d2622 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 09 23:01:55 addons-749116 crio[891]: time="2023-10-09 23:01:55.770758595Z" level=info msg="Removing pod sandbox: df15217b439e2b0a9318b712e2451184de3cc23a825c517ede1e3440ee9fadfe" id=70fb8eff-f018-42cb-ba8d-e5f0ecf65400 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 09 23:01:55 addons-749116 crio[891]: time="2023-10-09 23:01:55.783936007Z" level=info msg="Removed pod sandbox: df15217b439e2b0a9318b712e2451184de3cc23a825c517ede1e3440ee9fadfe" id=70fb8eff-f018-42cb-ba8d-e5f0ecf65400 name=/runtime.v1.RuntimeService/RemovePodSandbox
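
The StopPodSandbox/RemovePodSandbox pairs above are CRI-O garbage-collecting the ingress-nginx controller and admission-job sandboxes at 23:01:53–55, as the test tears ingress down. The same lifecycle can be driven by hand over the CRI API with crictl (a sketch; the sandbox ID placeholder comes from the listing step):

	sudo crictl pods                 # list sandboxes, including stopped ones
	sudo crictl stopp <sandbox-id>   # StopPodSandbox (no-op if already stopped)
	sudo crictl rmp <sandbox-id>     # RemovePodSandbox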
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                          CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	550995ec34d54       97e050c3e21e9472ce8eb8fcb7bb8f23063c0b473fe44bdc42246bb01c15cdd4                                               7 seconds ago       Exited              hello-world-app           2                   20940983a7e1b       hello-world-app-5d77478584-2mmph
	a6f6111a3ed75       docker.io/library/nginx@sha256:4c93a3bd8bf95412889dd84213570102176b6052d88bb828eaf449c56aca55ef                2 minutes ago       Running             nginx                     0                   637bf5cfb8e9f       nginx
	6a71a91d162e8       ghcr.io/headlamp-k8s/headlamp@sha256:8e813897da00c345b1169d624b32e2367e5da1dbbffe33226f8a92973b816b50          3 minutes ago       Running             headlamp                  0                   dda4f176a8f16       headlamp-94b766c-z5lhq
	1ae4f53a56584       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:63b520448091bc94aa4dba00d6b3b3c25e410c4fb73aa46feae5b25f9895abaa   4 minutes ago       Running             gcp-auth                  0                   2dc920d175b13       gcp-auth-d4c87556c-vpj2d
	4cc0f03b7b9d3       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                               5 minutes ago       Running             coredns                   0                   ead19fac15007       coredns-5dd5756b68-pkb4x
	ae40edcaff5c1       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                               5 minutes ago       Running             storage-provisioner       0                   fea61d0ae707b       storage-provisioner
	e551c71a85799       7da62c127fc0f2c3473babe4dd0fe1da874278c4e524a490b1781e3e0e6dddfa                                               5 minutes ago       Running             kube-proxy                0                   795b7af8414b2       kube-proxy-qkshl
	f2711ea8bd871       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26                                               5 minutes ago       Running             kindnet-cni               0                   e93aa4fec1ac4       kindnet-vkmtc
	d5571a725bbcc       89d57b83c17862d0ca2dd214e9e5ad425f8d67ecba32d10b846f8d22d3b5597c                                               6 minutes ago       Running             kube-controller-manager   0                   1fa42bdf77650       kube-controller-manager-addons-749116
	f2398f5cf10ec       64fc40cee3716a4596d219b360ce536adb7998eaeae3f5dbb774d6503f5039d7                                               6 minutes ago       Running             kube-scheduler            0                   1bdd1f2a1fe68       kube-scheduler-addons-749116
	92d19601d78ae       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                               6 minutes ago       Running             etcd                      0                   a2504723ce4fc       etcd-addons-749116
	c8a36523b66ad       30bb499447fe1bc655853e2e8ac386cdeb28c80263536259cb54f290f9a58d6c                                               6 minutes ago       Running             kube-apiserver            0                   fd6f4985b8bb9       kube-apiserver-addons-749116
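
Note the first row: the hello-world-app container had Exited seven seconds before this snapshot, on ATTEMPT 2, i.e. it was restarting while the rest of the control plane stayed up. To narrow crictl output to just that container on the node (assuming the same SSH access the test uses):

	sudo crictl ps -a --name hello-world-app -o table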
	
	* 
	* ==> coredns [4cc0f03b7b9d3ab899712d155cc00dab99e96faa4ac74af618e04263ad6f59fc] <==
	* [INFO] 10.244.0.19:43495 - 52665 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002035493s
	[INFO] 10.244.0.19:60856 - 24795 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000366992s
	[INFO] 10.244.0.19:60856 - 39797 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001177719s
	[INFO] 10.244.0.19:43495 - 15346 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002086612s
	[INFO] 10.244.0.19:60856 - 58882 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000919183s
	[INFO] 10.244.0.19:43495 - 12350 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000131266s
	[INFO] 10.244.0.19:60856 - 35270 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000045735s
	[INFO] 10.244.0.19:39518 - 10523 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000105584s
	[INFO] 10.244.0.19:45466 - 3279 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000047024s
	[INFO] 10.244.0.19:39518 - 11939 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000058913s
	[INFO] 10.244.0.19:45466 - 30473 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000164045s
	[INFO] 10.244.0.19:45466 - 27669 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000097559s
	[INFO] 10.244.0.19:39518 - 26384 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000081403s
	[INFO] 10.244.0.19:45466 - 18898 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00005422s
	[INFO] 10.244.0.19:39518 - 133 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000043061s
	[INFO] 10.244.0.19:39518 - 49123 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000067053s
	[INFO] 10.244.0.19:45466 - 58752 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000098921s
	[INFO] 10.244.0.19:45466 - 48922 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000091398s
	[INFO] 10.244.0.19:39518 - 27870 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000077514s
	[INFO] 10.244.0.19:39518 - 56932 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000983092s
	[INFO] 10.244.0.19:45466 - 25843 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001229288s
	[INFO] 10.244.0.19:39518 - 60147 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000892048s
	[INFO] 10.244.0.19:45466 - 21742 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001229353s
	[INFO] 10.244.0.19:39518 - 63630 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000250306s
	[INFO] 10.244.0.19:45466 - 14331 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00003552s
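
The NXDOMAIN-then-NOERROR pattern above is ordinary ndots search-path expansion: the client at 10.244.0.19 (the ingress-nginx controller, judging by the first search suffix it tries) appends each search domain in turn before the fully qualified hello-world-app.default.svc.cluster.local resolves. The pod resolver config driving it would look roughly like this — a representative example, not captured in this run, and the nameserver IP is only the conventional kube-dns ClusterIP:

	search ingress-nginx.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
	nameserver 10.96.0.10
	options ndots:5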
	
	* 
	* ==> describe nodes <==
	* Name:               addons-749116
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-749116
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90
	                    minikube.k8s.io/name=addons-749116
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_09T22_55_56_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-749116
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 09 Oct 2023 22:55:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-749116
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 09 Oct 2023 23:01:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 09 Oct 2023 22:59:29 +0000   Mon, 09 Oct 2023 22:55:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 09 Oct 2023 22:59:29 +0000   Mon, 09 Oct 2023 22:55:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 09 Oct 2023 22:59:29 +0000   Mon, 09 Oct 2023 22:55:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 09 Oct 2023 22:59:29 +0000   Mon, 09 Oct 2023 22:56:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-749116
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 cae09d20258140128251c9ad71240693
	  System UUID:                8cf30a7f-d531-47b6-8b16-2996e8ea4731
	  Boot ID:                    049a78d9-9f92-4a07-bf20-80a1aba53693
	  Kernel Version:             5.15.0-1047-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-2mmph         0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m47s
	  gcp-auth                    gcp-auth-d4c87556c-vpj2d                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m41s
	  headlamp                    headlamp-94b766c-z5lhq                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m9s
	  kube-system                 coredns-5dd5756b68-pkb4x                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m51s
	  kube-system                 etcd-addons-749116                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m3s
	  kube-system                 kindnet-vkmtc                            100m (5%)    100m (5%)   50Mi (0%)        50Mi (0%)      5m51s
	  kube-system                 kube-apiserver-addons-749116             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m3s
	  kube-system                 kube-controller-manager-addons-749116    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m3s
	  kube-system                 kube-proxy-qkshl                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m51s
	  kube-system                 kube-scheduler-addons-749116             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m3s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m44s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  6m11s (x8 over 6m11s)  kubelet          Node addons-749116 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m11s (x8 over 6m11s)  kubelet          Node addons-749116 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m11s (x8 over 6m11s)  kubelet          Node addons-749116 status is now: NodeHasSufficientPID
	  Normal  Starting                 6m3s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m3s                   kubelet          Node addons-749116 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m3s                   kubelet          Node addons-749116 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m3s                   kubelet          Node addons-749116 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m51s                  node-controller  Node addons-749116 event: Registered Node addons-749116 in Controller
	  Normal  NodeReady                5m19s                  kubelet          Node addons-749116 status is now: NodeReady
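
The 42% CPU request figure in the table above is the 850m total (100m coredns + 100m etcd + 100m kindnet + 250m apiserver + 200m controller-manager + 100m scheduler) measured against the node's 2-CPU (2000m) allocatable pool; a quick check of the truncating integer percentage kubectl prints:

	$ echo $((850 * 100 / 2000))%
	42%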
	
	* 
	* ==> dmesg <==
	* [  +0.000775] FS-Cache: N-cookie c=00000054 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000956] FS-Cache: N-cookie d=00000000cbdaf303{9p.inode} n=000000008c46acf3
	[  +0.001221] FS-Cache: N-key=[8] '4174ed0000000000'
	[  +0.002649] FS-Cache: Duplicate cookie detected
	[  +0.000797] FS-Cache: O-cookie c=0000004e [p=0000004b fl=226 nc=0 na=1]
	[  +0.001095] FS-Cache: O-cookie d=00000000cbdaf303{9p.inode} n=0000000051779926
	[  +0.001073] FS-Cache: O-key=[8] '4174ed0000000000'
	[  +0.000722] FS-Cache: N-cookie c=00000055 [p=0000004b fl=2 nc=0 na=1]
	[  +0.001039] FS-Cache: N-cookie d=00000000cbdaf303{9p.inode} n=0000000049f111b5
	[  +0.001114] FS-Cache: N-key=[8] '4174ed0000000000'
	[  +2.529641] FS-Cache: Duplicate cookie detected
	[  +0.000735] FS-Cache: O-cookie c=0000004c [p=0000004b fl=226 nc=0 na=1]
	[  +0.000965] FS-Cache: O-cookie d=00000000cbdaf303{9p.inode} n=000000007a21eb82
	[  +0.001086] FS-Cache: O-key=[8] '4074ed0000000000'
	[  +0.000731] FS-Cache: N-cookie c=00000057 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000948] FS-Cache: N-cookie d=00000000cbdaf303{9p.inode} n=000000008c46acf3
	[  +0.001021] FS-Cache: N-key=[8] '4074ed0000000000'
	[  +0.346685] FS-Cache: Duplicate cookie detected
	[  +0.000775] FS-Cache: O-cookie c=00000051 [p=0000004b fl=226 nc=0 na=1]
	[  +0.001029] FS-Cache: O-cookie d=00000000cbdaf303{9p.inode} n=000000008547c6a0
	[  +0.001112] FS-Cache: O-key=[8] '4674ed0000000000'
	[  +0.000729] FS-Cache: N-cookie c=00000058 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000945] FS-Cache: N-cookie d=00000000cbdaf303{9p.inode} n=00000000b53bc840
	[  +0.001191] FS-Cache: N-key=[8] '4674ed0000000000'
	[Oct 9 21:48] new mount options do not match the existing superblock, will be ignored
	
	* 
	* ==> etcd [92d19601d78ae84b6f32b973f79fb0ed3e89cb25c662b820e6d8ef81eb705e8b] <==
	* {"level":"info","ts":"2023-10-09T22:55:48.449369Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2023-10-09T22:55:48.449419Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2023-10-09T22:55:48.449463Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2023-10-09T22:55:48.449496Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-10-09T22:55:48.449533Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2023-10-09T22:55:48.449566Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-10-09T22:55:48.455299Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-749116 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-09T22:55:48.455503Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-09T22:55:48.458258Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-09T22:55:48.458366Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-09T22:55:48.458415Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-09T22:55:48.458452Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-09T22:55:48.459147Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-09T22:55:48.46007Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-09T22:55:48.460347Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2023-10-09T22:55:48.461965Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-09T22:55:48.461989Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-10-09T22:56:09.583648Z","caller":"traceutil/trace.go:171","msg":"trace[535192582] linearizableReadLoop","detail":"{readStateIndex:418; appliedIndex:416; }","duration":"147.796179ms","start":"2023-10-09T22:56:09.435836Z","end":"2023-10-09T22:56:09.583633Z","steps":["trace[535192582] 'read index received'  (duration: 75.402213ms)","trace[535192582] 'applied index is now lower than readState.Index'  (duration: 72.393441ms)"],"step_count":2}
	{"level":"warn","ts":"2023-10-09T22:56:09.587297Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"151.430528ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-qkshl\" ","response":"range_response_count:1 size:4422"}
	{"level":"info","ts":"2023-10-09T22:56:09.588025Z","caller":"traceutil/trace.go:171","msg":"trace[319200359] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-qkshl; range_end:; response_count:1; response_revision:408; }","duration":"152.220111ms","start":"2023-10-09T22:56:09.435789Z","end":"2023-10-09T22:56:09.588009Z","steps":["trace[319200359] 'agreement among raft nodes before linearized reading'  (duration: 151.392613ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-09T22:56:09.588385Z","caller":"traceutil/trace.go:171","msg":"trace[183470291] transaction","detail":"{read_only:false; response_revision:407; number_of_response:1; }","duration":"207.700456ms","start":"2023-10-09T22:56:09.380673Z","end":"2023-10-09T22:56:09.588374Z","steps":["trace[183470291] 'process raft request'  (duration: 130.517314ms)","trace[183470291] 'compare'  (duration: 72.315738ms)"],"step_count":2}
	{"level":"info","ts":"2023-10-09T22:56:09.605539Z","caller":"traceutil/trace.go:171","msg":"trace[116062451] transaction","detail":"{read_only:false; response_revision:408; number_of_response:1; }","duration":"224.888424ms","start":"2023-10-09T22:56:09.380629Z","end":"2023-10-09T22:56:09.605517Z","steps":["trace[116062451] 'process raft request'  (duration: 202.967412ms)"],"step_count":1}
	{"level":"info","ts":"2023-10-09T22:56:11.908937Z","caller":"traceutil/trace.go:171","msg":"trace[668626520] transaction","detail":"{read_only:false; response_revision:424; number_of_response:1; }","duration":"108.507449ms","start":"2023-10-09T22:56:11.800413Z","end":"2023-10-09T22:56:11.90892Z","steps":["trace[668626520] 'process raft request'  (duration: 27.078476ms)","trace[668626520] 'compare'  (duration: 81.228701ms)"],"step_count":2}
	{"level":"warn","ts":"2023-10-09T22:56:12.603211Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.894456ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges/default/\" range_end:\"/registry/limitranges/default0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-10-09T22:56:12.603369Z","caller":"traceutil/trace.go:171","msg":"trace[567158841] range","detail":"{range_begin:/registry/limitranges/default/; range_end:/registry/limitranges/default0; response_count:0; response_revision:434; }","duration":"110.063491ms","start":"2023-10-09T22:56:12.493292Z","end":"2023-10-09T22:56:12.603355Z","steps":["trace[567158841] 'agreement among raft nodes before linearized reading'  (duration: 35.038051ms)","trace[567158841] 'range keys from in-memory index tree'  (duration: 74.836033ms)"],"step_count":2}
	
	* 
	* ==> gcp-auth [1ae4f53a56584b55570a7d93982f6fe638e082207826668b7a27367790a691f3] <==
	* 2023/10/09 22:57:25 GCP Auth Webhook started!
	2023/10/09 22:58:02 Ready to marshal response ...
	2023/10/09 22:58:02 Ready to write response ...
	2023/10/09 22:58:03 Ready to marshal response ...
	2023/10/09 22:58:03 Ready to write response ...
	2023/10/09 22:58:19 Ready to marshal response ...
	2023/10/09 22:58:19 Ready to write response ...
	2023/10/09 22:58:19 Ready to marshal response ...
	2023/10/09 22:58:19 Ready to write response ...
	2023/10/09 22:58:24 Ready to marshal response ...
	2023/10/09 22:58:24 Ready to write response ...
	2023/10/09 22:58:27 Ready to marshal response ...
	2023/10/09 22:58:27 Ready to write response ...
	2023/10/09 22:58:49 Ready to marshal response ...
	2023/10/09 22:58:49 Ready to write response ...
	2023/10/09 22:58:49 Ready to marshal response ...
	2023/10/09 22:58:49 Ready to write response ...
	2023/10/09 22:58:49 Ready to marshal response ...
	2023/10/09 22:58:49 Ready to write response ...
	2023/10/09 22:59:11 Ready to marshal response ...
	2023/10/09 22:59:11 Ready to write response ...
	2023/10/09 23:01:32 Ready to marshal response ...
	2023/10/09 23:01:32 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  23:01:58 up  6:44,  0 users,  load average: 0.15, 0.94, 1.78
	Linux addons-749116 5.15.0-1047-aws #52~20.04.1-Ubuntu SMP Thu Sep 21 10:08:54 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [f2711ea8bd871058cfa1700a7f7ed89f457ee1cd3cdc0665a62b594b4cae144e] <==
	* I1009 22:59:49.539511       1 main.go:227] handling current node
	I1009 22:59:59.553357       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1009 22:59:59.553384       1 main.go:227] handling current node
	I1009 23:00:09.564735       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1009 23:00:09.564763       1 main.go:227] handling current node
	I1009 23:00:19.577411       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1009 23:00:19.577441       1 main.go:227] handling current node
	I1009 23:00:29.581359       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1009 23:00:29.581386       1 main.go:227] handling current node
	I1009 23:00:39.591673       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1009 23:00:39.591781       1 main.go:227] handling current node
	I1009 23:00:49.596458       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1009 23:00:49.596487       1 main.go:227] handling current node
	I1009 23:00:59.607635       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1009 23:00:59.607661       1 main.go:227] handling current node
	I1009 23:01:09.612016       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1009 23:01:09.612044       1 main.go:227] handling current node
	I1009 23:01:19.623472       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1009 23:01:19.623501       1 main.go:227] handling current node
	I1009 23:01:29.629173       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1009 23:01:29.629202       1 main.go:227] handling current node
	I1009 23:01:39.636812       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1009 23:01:39.636841       1 main.go:227] handling current node
	I1009 23:01:49.647639       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1009 23:01:49.647666       1 main.go:227] handling current node
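	
	The kindnet entries above trace a fixed-interval reconcile loop: roughly every 10 seconds the daemon re-reads the node list and re-handles the current node. A minimal Go sketch of that polling shape, assuming a hypothetical handleNode helper (an illustration, not kindnet's actual code):
	
	package main
	
	import (
		"log"
		"time"
	)
	
	// handleNode stands in for the per-node work the daemon performs
	// (kindnet programs routes for each node's pod network).
	func handleNode(ips map[string]struct{}) {
		log.Printf("Handling node with IPs: %v", ips)
	}
	
	func main() {
		ticker := time.NewTicker(10 * time.Second) // matches the ~10s cadence in the log
		defer ticker.Stop()
		for range ticker.C {
			// In the real daemon the IP set comes from the Kubernetes API;
			// 192.168.49.2 is simply the node IP seen in the log above.
			handleNode(map[string]struct{}{"192.168.49.2": {}})
		}
	}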
	
	* 
	* ==> kube-apiserver [c8a36523b66adffac6c7460d915eb992061251468a54b5a78ec92e4910d08e89] <==
	* I1009 22:58:41.777867       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1009 22:58:41.778270       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1009 22:58:41.791075       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1009 22:58:41.791689       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1009 22:58:41.802684       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1009 22:58:41.802820       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1009 22:58:41.812609       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1009 22:58:41.813288       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1009 22:58:41.821598       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1009 22:58:41.821654       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1009 22:58:41.838682       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1009 22:58:41.838790       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1009 22:58:42.813424       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1009 22:58:42.839107       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1009 22:58:42.847781       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E1009 22:58:43.082588       1 authentication.go:70] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1009 22:58:49.111790       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.97.230.189"}
	I1009 22:58:51.661273       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1009 22:59:11.569211       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I1009 22:59:11.976472       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.109.84.131"}
	I1009 22:59:12.036742       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I1009 22:59:12.057353       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1009 22:59:13.086087       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1009 22:59:56.203646       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1009 23:01:32.951462       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.111.220.95"}
	
	* 
	* ==> kube-controller-manager [d5571a725bbcc14072f18bd0f5c7c01c22ca9c60b0cce974a49ec776a9a6bc56] <==
	* W1009 23:01:14.894804       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1009 23:01:14.894836       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1009 23:01:24.243283       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1009 23:01:24.243318       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1009 23:01:29.959076       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1009 23:01:29.959108       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1009 23:01:32.707910       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I1009 23:01:32.746073       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-2mmph"
	I1009 23:01:32.766509       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="57.934008ms"
	I1009 23:01:32.812004       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="45.37364ms"
	I1009 23:01:32.830912       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="18.789425ms"
	I1009 23:01:32.831137       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="76.858µs"
	I1009 23:01:36.411740       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="85.079µs"
	I1009 23:01:37.422707       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="69.276µs"
	I1009 23:01:38.411600       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="70.195µs"
	W1009 23:01:46.944183       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1009 23:01:46.944317       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1009 23:01:50.162407       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I1009 23:01:50.166408       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-5c4c674fdc" duration="6.695µs"
	I1009 23:01:50.174350       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I1009 23:01:51.446460       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="43.176µs"
	W1009 23:01:53.031582       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1009 23:01:53.031625       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1009 23:01:57.525393       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1009 23:01:57.525426       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
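	
	The repeating PartialObjectMetadata failures above typically follow a CRD deletion: kube-controller-manager watches every discovered resource through metadata-only informers, and after a CRD is removed its informer keeps relisting a resource the server no longer serves until discovery refreshes. The timing fits the gadget.kinvolk.io traces CRD, whose watchers the apiserver terminated at 22:59:13. A hedged sketch of such a metadata-only informer for that GVR (clientset construction elided):
	
	import (
		"time"
	
		"k8s.io/apimachinery/pkg/runtime/schema"
		"k8s.io/client-go/metadata"
		"k8s.io/client-go/metadata/metadatainformer"
		"k8s.io/client-go/rest"
	)
	
	// watchTraceMetadata lists/watches only object metadata for the GVR from
	// the log; once the CRD behind it is deleted, each relist fails with
	// "could not find the requested resource", as seen above.
	func watchTraceMetadata(cfg *rest.Config, stopCh <-chan struct{}) error {
		client, err := metadata.NewForConfig(cfg)
		if err != nil {
			return err
		}
		gvr := schema.GroupVersionResource{Group: "gadget.kinvolk.io", Version: "v1alpha1", Resource: "traces"}
		factory := metadatainformer.NewSharedInformerFactory(client, 10*time.Minute)
		informer := factory.ForResource(gvr).Informer()
		go informer.Run(stopCh)
		return nil
	}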
	
	* 
	* ==> kube-proxy [e551c71a85799a0e6ef4d54cc1525f1d3581d9369604475fb1dbf1179a52e6ad] <==
	* I1009 22:56:12.925360       1 server_others.go:69] "Using iptables proxy"
	I1009 22:56:13.548798       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I1009 22:56:13.959586       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1009 22:56:13.962540       1 server_others.go:152] "Using iptables Proxier"
	I1009 22:56:13.962654       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1009 22:56:13.962687       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1009 22:56:13.962790       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1009 22:56:13.963072       1 server.go:846] "Version info" version="v1.28.2"
	I1009 22:56:13.986941       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 22:56:13.987912       1 config.go:188] "Starting service config controller"
	I1009 22:56:13.988042       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1009 22:56:13.988106       1 config.go:97] "Starting endpoint slice config controller"
	I1009 22:56:13.988142       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1009 22:56:13.988707       1 config.go:315] "Starting node config controller"
	I1009 22:56:13.988768       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1009 22:56:14.088836       1 shared_informer.go:318] Caches are synced for node config
	I1009 22:56:14.093331       1 shared_informer.go:318] Caches are synced for service config
	I1009 22:56:14.093348       1 shared_informer.go:318] Caches are synced for endpoint slice config
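	
	The Waiting-for-caches / Caches-are-synced pairs above are the standard client-go shared-informer startup handshake used by kube-proxy's config controllers. A minimal sketch of the pattern, assuming an already-built clientset (the service informer is just one example):
	
	import (
		"k8s.io/client-go/informers"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/cache"
	)
	
	func startInformers(clientset kubernetes.Interface, stopCh <-chan struct{}) bool {
		factory := informers.NewSharedInformerFactory(clientset, 0)
		svcInformer := factory.Core().V1().Services().Informer()
	
		factory.Start(stopCh) // kicks off the list/watch goroutines
	
		// Block until the initial list has landed, mirroring the
		// "Waiting for caches to sync" and "Caches are synced" lines.
		return cache.WaitForCacheSync(stopCh, svcInformer.HasSynced)
	}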
	
	* 
	* ==> kube-scheduler [f2398f5cf10ec11d73b64dc3719508a681dc6d5ba320b0fb0743d21d07f66c1e] <==
	* W1009 22:55:51.917461       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1009 22:55:51.917509       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1009 22:55:51.917609       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1009 22:55:51.917668       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1009 22:55:52.793544       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1009 22:55:52.793586       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1009 22:55:52.806035       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1009 22:55:52.806144       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1009 22:55:52.840950       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1009 22:55:52.841060       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1009 22:55:52.863169       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1009 22:55:52.863207       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1009 22:55:52.929257       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1009 22:55:52.929489       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1009 22:55:52.929441       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1009 22:55:52.929606       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1009 22:55:52.942241       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1009 22:55:52.942375       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1009 22:55:53.024338       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1009 22:55:53.024455       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1009 22:55:53.028105       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1009 22:55:53.028224       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1009 22:55:53.166049       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1009 22:55:53.166196       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1009 22:55:56.160792       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
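	
	The forbidden errors above are the usual scheduler startup race: its reflectors begin listing before the RBAC grants are visible to the authorizer, and the errors stop once startup completes (the final cache-sync line). To test a permission the way the authorizer evaluates it, a SelfSubjectAccessReview can be issued; a hedged sketch assuming an existing clientset:
	
	import (
		"context"
	
		authv1 "k8s.io/api/authorization/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)
	
	// canList reports whether the current credentials may list the resource,
	// e.g. canList(ctx, cs, "storage.k8s.io", "csinodes") for one of the
	// denials in the log above.
	func canList(ctx context.Context, cs kubernetes.Interface, group, resource string) (bool, error) {
		review := &authv1.SelfSubjectAccessReview{
			Spec: authv1.SelfSubjectAccessReviewSpec{
				ResourceAttributes: &authv1.ResourceAttributes{Verb: "list", Group: group, Resource: resource},
			},
		}
		resp, err := cs.AuthorizationV1().SelfSubjectAccessReviews().Create(ctx, review, metav1.CreateOptions{})
		if err != nil {
			return false, err
		}
		return resp.Status.Allowed, nil
	}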
	
	* 
	* ==> kubelet <==
	* Oct 09 23:01:55 addons-749116 kubelet[1362]: E1009 23:01:55.441830    1362 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/0f3508ed32a1f3a48a337333fabe5d9807f7e18561f114b835b893e1df6bf3e6/diff" to get inode usage: stat /var/lib/containers/storage/overlay/0f3508ed32a1f3a48a337333fabe5d9807f7e18561f114b835b893e1df6bf3e6/diff: no such file or directory, extraDiskErr: <nil>
	Oct 09 23:01:55 addons-749116 kubelet[1362]: E1009 23:01:55.444045    1362 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/2766f28f837f1e4fdd585f90a2807f5510b20a309e5284c2c124e74f4d5b0356/diff" to get inode usage: stat /var/lib/containers/storage/overlay/2766f28f837f1e4fdd585f90a2807f5510b20a309e5284c2c124e74f4d5b0356/diff: no such file or directory, extraDiskErr: <nil>
	Oct 09 23:01:55 addons-749116 kubelet[1362]: E1009 23:01:55.446226    1362 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/621421d3da0d671042abbafc74cfe24d1dacb32c6786fc033961ba37d38a50ee/diff" to get inode usage: stat /var/lib/containers/storage/overlay/621421d3da0d671042abbafc74cfe24d1dacb32c6786fc033961ba37d38a50ee/diff: no such file or directory, extraDiskErr: <nil>
	Oct 09 23:01:55 addons-749116 kubelet[1362]: E1009 23:01:55.446226    1362 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/477f13ce320ddd31dcb82e6c69ecb023f42bc6f7871b54e7b084d3dae8556888/diff" to get inode usage: stat /var/lib/containers/storage/overlay/477f13ce320ddd31dcb82e6c69ecb023f42bc6f7871b54e7b084d3dae8556888/diff: no such file or directory, extraDiskErr: <nil>
	Oct 09 23:01:55 addons-749116 kubelet[1362]: E1009 23:01:55.456641    1362 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/4883642c451df1e65996f89586f10c3a85717cd49acb63d25a92e45822f5b534/diff" to get inode usage: stat /var/lib/containers/storage/overlay/4883642c451df1e65996f89586f10c3a85717cd49acb63d25a92e45822f5b534/diff: no such file or directory, extraDiskErr: <nil>
	Oct 09 23:01:55 addons-749116 kubelet[1362]: E1009 23:01:55.456641    1362 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/d63ff7e34a16a0889d4efb82b89c1a0146f1f7dc967cd330de969e6b535a128f/diff" to get inode usage: stat /var/lib/containers/storage/overlay/d63ff7e34a16a0889d4efb82b89c1a0146f1f7dc967cd330de969e6b535a128f/diff: no such file or directory, extraDiskErr: <nil>
	Oct 09 23:01:55 addons-749116 kubelet[1362]: E1009 23:01:55.462944    1362 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/f77f02e064d664f9d0b898e3d84c709582cd9e2f7d10269eda340d85148c0ed1/diff" to get inode usage: stat /var/lib/containers/storage/overlay/f77f02e064d664f9d0b898e3d84c709582cd9e2f7d10269eda340d85148c0ed1/diff: no such file or directory, extraDiskErr: <nil>
	Oct 09 23:01:55 addons-749116 kubelet[1362]: E1009 23:01:55.462945    1362 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/2766f28f837f1e4fdd585f90a2807f5510b20a309e5284c2c124e74f4d5b0356/diff" to get inode usage: stat /var/lib/containers/storage/overlay/2766f28f837f1e4fdd585f90a2807f5510b20a309e5284c2c124e74f4d5b0356/diff: no such file or directory, extraDiskErr: <nil>
	Oct 09 23:01:55 addons-749116 kubelet[1362]: E1009 23:01:55.462960    1362 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/a0e3c583ef27a97b901ba7c6c03cb9baddd28166f08d461291e52a336a16d198/diff" to get inode usage: stat /var/lib/containers/storage/overlay/a0e3c583ef27a97b901ba7c6c03cb9baddd28166f08d461291e52a336a16d198/diff: no such file or directory, extraDiskErr: <nil>
	Oct 09 23:01:55 addons-749116 kubelet[1362]: E1009 23:01:55.462973    1362 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/bc07dd43eab397d35c7698c250d85c8351a51d6dbd3b985820ef2739e84ffba9/diff" to get inode usage: stat /var/lib/containers/storage/overlay/bc07dd43eab397d35c7698c250d85c8351a51d6dbd3b985820ef2739e84ffba9/diff: no such file or directory, extraDiskErr: <nil>
	Oct 09 23:01:55 addons-749116 kubelet[1362]: E1009 23:01:55.464431    1362 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/384636e707bb1341db9f0fc88e812ffbe7890b0fdda173af839e99cac5ac3f56/diff" to get inode usage: stat /var/lib/containers/storage/overlay/384636e707bb1341db9f0fc88e812ffbe7890b0fdda173af839e99cac5ac3f56/diff: no such file or directory, extraDiskErr: <nil>
	Oct 09 23:01:55 addons-749116 kubelet[1362]: E1009 23:01:55.470731    1362 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/e49914681963c920963ab953019165b2786fce50bd7388b07f478f4108c67bf1/diff" to get inode usage: stat /var/lib/containers/storage/overlay/e49914681963c920963ab953019165b2786fce50bd7388b07f478f4108c67bf1/diff: no such file or directory, extraDiskErr: <nil>
	Oct 09 23:01:55 addons-749116 kubelet[1362]: E1009 23:01:55.471869    1362 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/269f3e9346311492dc2a51136b0870052339a807690151b704a1547c246a6e5e/diff" to get inode usage: stat /var/lib/containers/storage/overlay/269f3e9346311492dc2a51136b0870052339a807690151b704a1547c246a6e5e/diff: no such file or directory, extraDiskErr: <nil>
	Oct 09 23:01:55 addons-749116 kubelet[1362]: E1009 23:01:55.475359    1362 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/4883642c451df1e65996f89586f10c3a85717cd49acb63d25a92e45822f5b534/diff" to get inode usage: stat /var/lib/containers/storage/overlay/4883642c451df1e65996f89586f10c3a85717cd49acb63d25a92e45822f5b534/diff: no such file or directory, extraDiskErr: <nil>
	Oct 09 23:01:55 addons-749116 kubelet[1362]: E1009 23:01:55.476513    1362 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/269f3e9346311492dc2a51136b0870052339a807690151b704a1547c246a6e5e/diff" to get inode usage: stat /var/lib/containers/storage/overlay/269f3e9346311492dc2a51136b0870052339a807690151b704a1547c246a6e5e/diff: no such file or directory, extraDiskErr: <nil>
	Oct 09 23:01:55 addons-749116 kubelet[1362]: E1009 23:01:55.478703    1362 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/e49914681963c920963ab953019165b2786fce50bd7388b07f478f4108c67bf1/diff" to get inode usage: stat /var/lib/containers/storage/overlay/e49914681963c920963ab953019165b2786fce50bd7388b07f478f4108c67bf1/diff: no such file or directory, extraDiskErr: <nil>
	Oct 09 23:01:55 addons-749116 kubelet[1362]: E1009 23:01:55.480812    1362 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/764900d2b01728b794468c58171c22966f15e5bf925c8edcf152ac24a2e162fa/diff" to get inode usage: stat /var/lib/containers/storage/overlay/764900d2b01728b794468c58171c22966f15e5bf925c8edcf152ac24a2e162fa/diff: no such file or directory, extraDiskErr: <nil>
	Oct 09 23:01:55 addons-749116 kubelet[1362]: E1009 23:01:55.480821    1362 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/8841a3e08c4bacd5ea55a65bf7a367b7a5eeb19b3a20eddf48c0022387fa7d24/diff" to get inode usage: stat /var/lib/containers/storage/overlay/8841a3e08c4bacd5ea55a65bf7a367b7a5eeb19b3a20eddf48c0022387fa7d24/diff: no such file or directory, extraDiskErr: <nil>
	Oct 09 23:01:55 addons-749116 kubelet[1362]: E1009 23:01:55.483060    1362 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/621421d3da0d671042abbafc74cfe24d1dacb32c6786fc033961ba37d38a50ee/diff" to get inode usage: stat /var/lib/containers/storage/overlay/621421d3da0d671042abbafc74cfe24d1dacb32c6786fc033961ba37d38a50ee/diff: no such file or directory, extraDiskErr: <nil>
	Oct 09 23:01:55 addons-749116 kubelet[1362]: E1009 23:01:55.491981    1362 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/764900d2b01728b794468c58171c22966f15e5bf925c8edcf152ac24a2e162fa/diff" to get inode usage: stat /var/lib/containers/storage/overlay/764900d2b01728b794468c58171c22966f15e5bf925c8edcf152ac24a2e162fa/diff: no such file or directory, extraDiskErr: <nil>
	Oct 09 23:01:55 addons-749116 kubelet[1362]: E1009 23:01:55.501197    1362 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/f77f02e064d664f9d0b898e3d84c709582cd9e2f7d10269eda340d85148c0ed1/diff" to get inode usage: stat /var/lib/containers/storage/overlay/f77f02e064d664f9d0b898e3d84c709582cd9e2f7d10269eda340d85148c0ed1/diff: no such file or directory, extraDiskErr: <nil>
	Oct 09 23:01:55 addons-749116 kubelet[1362]: E1009 23:01:55.512751    1362 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/3569477c9daa60257f29ecc3ccf0b8640c431bf97fdaa27c2adc2ed7c55063d0/diff" to get inode usage: stat /var/lib/containers/storage/overlay/3569477c9daa60257f29ecc3ccf0b8640c431bf97fdaa27c2adc2ed7c55063d0/diff: no such file or directory, extraDiskErr: <nil>
	Oct 09 23:01:55 addons-749116 kubelet[1362]: I1009 23:01:55.665010    1362 scope.go:117] "RemoveContainer" containerID="f149c3a280ca7eda1945dc1aac4df981932babb1478dbc80a70c8f1bf489e87f"
	Oct 09 23:01:55 addons-749116 kubelet[1362]: I1009 23:01:55.704559    1362 scope.go:117] "RemoveContainer" containerID="b30e74788a705406069f502eb03db0d914478b5457f3958e9b2512191daf5beb"
	
	* 
	* ==> storage-provisioner [ae40edcaff5c15de7e476c7e5c941dd5d001072bd624aa178a1d858859eaadd2] <==
	* I1009 22:56:40.475082       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1009 22:56:40.508771       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1009 22:56:40.508862       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1009 22:56:40.520313       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1009 22:56:40.520590       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"95a5f712-3256-4d4a-a269-115b2e3ee2ab", APIVersion:"v1", ResourceVersion:"874", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-749116_bd1668c4-9229-43d7-8356-078e654b7b19 became leader
	I1009 22:56:40.521645       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-749116_bd1668c4-9229-43d7-8356-078e654b7b19!
	I1009 22:56:40.622697       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-749116_bd1668c4-9229-43d7-8356-078e654b7b19!
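	
	The storage-provisioner lines follow the client-go leader-election flow: acquire the kube-system/k8s.io-minikube-hostpath lock, and only then start the provisioner controller. A hedged sketch of the same flow with the modern Lease-based lock (this provisioner actually uses an older Endpoints lock, and the identity and durations below are placeholders):
	
	import (
		"context"
		"log"
		"time"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)
	
	func runElected(ctx context.Context, cs kubernetes.Interface, id string, start func(context.Context)) {
		lock := &resourcelock.LeaseLock{
			LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
			Client:     cs.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: id},
		}
		leaderelection.RunOrDie(ctx, leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: start, // "successfully acquired lease" in the log
				OnStoppedLeading: func() { log.Printf("%s lost the lease", id) },
			},
		})
	}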
	
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-749116 -n addons-749116
helpers_test.go:261: (dbg) Run:  kubectl --context addons-749116 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (169.40s)
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddons (177.35s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:206: (dbg) Run:  kubectl --context ingress-addon-legacy-789037 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:206: (dbg) Done: kubectl --context ingress-addon-legacy-789037 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (10.191302679s)
addons_test.go:231: (dbg) Run:  kubectl --context ingress-addon-legacy-789037 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context ingress-addon-legacy-789037 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [7ec989fa-bf2c-496e-9383-5f64fb6aaa1e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [7ec989fa-bf2c-496e-9383-5f64fb6aaa1e] Running
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 9.017356163s
addons_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-789037 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E1009 23:11:11.758456 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/functional-634060/client.crt: no such file or directory
E1009 23:11:11.763828 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/functional-634060/client.crt: no such file or directory
E1009 23:11:11.774153 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/functional-634060/client.crt: no such file or directory
E1009 23:11:11.794420 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/functional-634060/client.crt: no such file or directory
E1009 23:11:11.834715 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/functional-634060/client.crt: no such file or directory
E1009 23:11:11.915027 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/functional-634060/client.crt: no such file or directory
E1009 23:11:12.075486 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/functional-634060/client.crt: no such file or directory
E1009 23:11:12.396024 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/functional-634060/client.crt: no such file or directory
E1009 23:11:13.037022 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/functional-634060/client.crt: no such file or directory
E1009 23:11:14.317243 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/functional-634060/client.crt: no such file or directory
E1009 23:11:16.877494 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/functional-634060/client.crt: no such file or directory
E1009 23:11:21.997751 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/functional-634060/client.crt: no such file or directory
E1009 23:11:32.237998 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/functional-634060/client.crt: no such file or directory
addons_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ingress-addon-legacy-789037 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.096160122s)
** stderr ** 
	ssh: Process exited with status 28
** /stderr **
addons_test.go:277: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
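Exit status 28 is curl's operation-timeout code, so the SSH command ran but nginx never answered through the ingress before curl gave up. The same probe can be reproduced from outside the node by overriding the Host header, which is what the ingress routes on; a minimal Go sketch (the node IP is taken from the log, the timeout is arbitrary):

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 10 * time.Second}
	req, err := http.NewRequest("GET", "http://192.168.49.2/", nil)
	if err != nil {
		panic(err)
	}
	// Equivalent of curl -H 'Host: nginx.example.com': Go sends req.Host
	// as the request's Host header.
	req.Host = "nginx.example.com"

	resp, err := client.Do(req)
	if err != nil {
		fmt.Println("probe failed:", err) // this report's case: a timeout
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}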
addons_test.go:285: (dbg) Run:  kubectl --context ingress-addon-legacy-789037 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-789037 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.49.2
E1009 23:11:52.718565 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/functional-634060/client.crt: no such file or directory
addons_test.go:296: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.012155344s)
-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
addons_test.go:298: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:302: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached
stderr: 
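The nslookup timeout means nothing answered on 192.168.49.2:53, where the ingress-dns addon should be serving. To query that server directly rather than the system resolver, Go's net.Resolver accepts a custom dialer; a minimal sketch using the host name and IP from the log:

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			// Send every query to the minikube node's DNS endpoint.
			d := net.Dialer{Timeout: 2 * time.Second}
			return d.DialContext(ctx, network, "192.168.49.2:53")
		},
	}
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	addrs, err := r.LookupHost(ctx, "hello-john.test")
	if err != nil {
		fmt.Println("lookup failed:", err) // this report's case: no answer at all
		return
	}
	fmt.Println("resolved:", addrs)
}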
addons_test.go:305: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-789037 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-789037 addons disable ingress-dns --alsologtostderr -v=1: (1.664727629s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-789037 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-789037 addons disable ingress --alsologtostderr -v=1: (7.843449108s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-789037
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-789037:
-- stdout --
	[
	    {
	        "Id": "92a50b64d14a3758d17b870c4625b6b5f81921000fc6af2a82fdfb34403b196c",
	        "Created": "2023-10-09T23:07:41.684195108Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1572898,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-09T23:07:42.056253141Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:7c31788aee97084e64d3a410721295a10fc01c1f34b468c1bc9be09686708026",
	        "ResolvConfPath": "/var/lib/docker/containers/92a50b64d14a3758d17b870c4625b6b5f81921000fc6af2a82fdfb34403b196c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/92a50b64d14a3758d17b870c4625b6b5f81921000fc6af2a82fdfb34403b196c/hostname",
	        "HostsPath": "/var/lib/docker/containers/92a50b64d14a3758d17b870c4625b6b5f81921000fc6af2a82fdfb34403b196c/hosts",
	        "LogPath": "/var/lib/docker/containers/92a50b64d14a3758d17b870c4625b6b5f81921000fc6af2a82fdfb34403b196c/92a50b64d14a3758d17b870c4625b6b5f81921000fc6af2a82fdfb34403b196c-json.log",
	        "Name": "/ingress-addon-legacy-789037",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-789037:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-789037",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/970e390c2b3162380d8508380fb3bd475dd4b9d821f51fbc36c6acd66028e968-init/diff:/var/lib/docker/overlay2/ef9093ba51e6eb88ff4b48fff9bf153334448175aa68f58581a9571eed9ca4f9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/970e390c2b3162380d8508380fb3bd475dd4b9d821f51fbc36c6acd66028e968/merged",
	                "UpperDir": "/var/lib/docker/overlay2/970e390c2b3162380d8508380fb3bd475dd4b9d821f51fbc36c6acd66028e968/diff",
	                "WorkDir": "/var/lib/docker/overlay2/970e390c2b3162380d8508380fb3bd475dd4b9d821f51fbc36c6acd66028e968/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-789037",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-789037/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-789037",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-789037",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-789037",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a8436ba5276fa5f7cabcdca6c5e94076b7f1411429607475b648ea6241a5f448",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34374"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34373"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34370"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34372"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34371"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/a8436ba5276f",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-789037": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "92a50b64d14a",
	                        "ingress-addon-legacy-789037"
	                    ],
	                    "NetworkID": "97d81ddae3fb87efcfae6acd4fdfbd2167480037a39f94f35bcf4634b84bc06c",
	                    "EndpointID": "7242edbc695ad6a2fa73ab186a64fbb75343643cc6952d4540503cde97950306",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ingress-addon-legacy-789037 -n ingress-addon-legacy-789037
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-789037 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-789037 logs -n 25: (1.424095151s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|--------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                 Args                 |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|--------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| start          | -p functional-634060                 | functional-634060           | jenkins | v1.31.2 | 09 Oct 23 23:07 UTC |                     |
	|                | --dry-run --memory                   |                             |         |         |                     |                     |
	|                | 250MB --alsologtostderr              |                             |         |         |                     |                     |
	|                | --driver=docker                      |                             |         |         |                     |                     |
	|                | --container-runtime=crio             |                             |         |         |                     |                     |
	| service        | functional-634060 service            | functional-634060           | jenkins | v1.31.2 | 09 Oct 23 23:07 UTC | 09 Oct 23 23:07 UTC |
	|                | --namespace=default --https          |                             |         |         |                     |                     |
	|                | --url hello-node                     |                             |         |         |                     |                     |
	| start          | -p functional-634060                 | functional-634060           | jenkins | v1.31.2 | 09 Oct 23 23:07 UTC |                     |
	|                | --dry-run --alsologtostderr          |                             |         |         |                     |                     |
	|                | -v=1 --driver=docker                 |                             |         |         |                     |                     |
	|                | --container-runtime=crio             |                             |         |         |                     |                     |
	| service        | functional-634060                    | functional-634060           | jenkins | v1.31.2 | 09 Oct 23 23:07 UTC | 09 Oct 23 23:07 UTC |
	|                | service hello-node --url             |                             |         |         |                     |                     |
	|                | --format={{.IP}}                     |                             |         |         |                     |                     |
	| service        | functional-634060 service            | functional-634060           | jenkins | v1.31.2 | 09 Oct 23 23:07 UTC | 09 Oct 23 23:07 UTC |
	|                | hello-node --url                     |                             |         |         |                     |                     |
	| start          | -p functional-634060                 | functional-634060           | jenkins | v1.31.2 | 09 Oct 23 23:07 UTC |                     |
	|                | --dry-run --memory                   |                             |         |         |                     |                     |
	|                | 250MB --alsologtostderr              |                             |         |         |                     |                     |
	|                | --driver=docker                      |                             |         |         |                     |                     |
	|                | --container-runtime=crio             |                             |         |         |                     |                     |
	| dashboard      | --url --port 36195                   | functional-634060           | jenkins | v1.31.2 | 09 Oct 23 23:07 UTC | 09 Oct 23 23:07 UTC |
	|                | -p functional-634060                 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1               |                             |         |         |                     |                     |
	| update-context | functional-634060                    | functional-634060           | jenkins | v1.31.2 | 09 Oct 23 23:07 UTC | 09 Oct 23 23:07 UTC |
	|                | update-context                       |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2               |                             |         |         |                     |                     |
	| update-context | functional-634060                    | functional-634060           | jenkins | v1.31.2 | 09 Oct 23 23:07 UTC | 09 Oct 23 23:07 UTC |
	|                | update-context                       |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2               |                             |         |         |                     |                     |
	| update-context | functional-634060                    | functional-634060           | jenkins | v1.31.2 | 09 Oct 23 23:07 UTC | 09 Oct 23 23:07 UTC |
	|                | update-context                       |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2               |                             |         |         |                     |                     |
	| image          | functional-634060                    | functional-634060           | jenkins | v1.31.2 | 09 Oct 23 23:07 UTC | 09 Oct 23 23:07 UTC |
	|                | image ls --format short              |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| image          | functional-634060                    | functional-634060           | jenkins | v1.31.2 | 09 Oct 23 23:07 UTC | 09 Oct 23 23:07 UTC |
	|                | image ls --format yaml               |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| ssh            | functional-634060 ssh pgrep          | functional-634060           | jenkins | v1.31.2 | 09 Oct 23 23:07 UTC |                     |
	|                | buildkitd                            |                             |         |         |                     |                     |
	| image          | functional-634060 image build -t     | functional-634060           | jenkins | v1.31.2 | 09 Oct 23 23:07 UTC | 09 Oct 23 23:07 UTC |
	|                | localhost/my-image:functional-634060 |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr     |                             |         |         |                     |                     |
	| image          | functional-634060 image ls           | functional-634060           | jenkins | v1.31.2 | 09 Oct 23 23:07 UTC | 09 Oct 23 23:07 UTC |
	| image          | functional-634060                    | functional-634060           | jenkins | v1.31.2 | 09 Oct 23 23:07 UTC | 09 Oct 23 23:07 UTC |
	|                | image ls --format json               |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| image          | functional-634060                    | functional-634060           | jenkins | v1.31.2 | 09 Oct 23 23:07 UTC | 09 Oct 23 23:07 UTC |
	|                | image ls --format table              |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	| delete         | -p functional-634060                 | functional-634060           | jenkins | v1.31.2 | 09 Oct 23 23:07 UTC | 09 Oct 23 23:07 UTC |
	| start          | -p ingress-addon-legacy-789037       | ingress-addon-legacy-789037 | jenkins | v1.31.2 | 09 Oct 23 23:07 UTC | 09 Oct 23 23:08 UTC |
	|                | --kubernetes-version=v1.18.20        |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true            |                             |         |         |                     |                     |
	|                | --alsologtostderr                    |                             |         |         |                     |                     |
	|                | -v=5 --driver=docker                 |                             |         |         |                     |                     |
	|                | --container-runtime=crio             |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-789037          | ingress-addon-legacy-789037 | jenkins | v1.31.2 | 09 Oct 23 23:08 UTC | 09 Oct 23 23:09 UTC |
	|                | addons enable ingress                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-789037          | ingress-addon-legacy-789037 | jenkins | v1.31.2 | 09 Oct 23 23:09 UTC | 09 Oct 23 23:09 UTC |
	|                | addons enable ingress-dns            |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5               |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-789037          | ingress-addon-legacy-789037 | jenkins | v1.31.2 | 09 Oct 23 23:09 UTC |                     |
	|                | ssh curl -s http://127.0.0.1/        |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'         |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-789037 ip       | ingress-addon-legacy-789037 | jenkins | v1.31.2 | 09 Oct 23 23:11 UTC | 09 Oct 23 23:11 UTC |
	| addons         | ingress-addon-legacy-789037          | ingress-addon-legacy-789037 | jenkins | v1.31.2 | 09 Oct 23 23:11 UTC | 09 Oct 23 23:11 UTC |
	|                | addons disable ingress-dns           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-789037          | ingress-addon-legacy-789037 | jenkins | v1.31.2 | 09 Oct 23 23:11 UTC | 09 Oct 23 23:12 UTC |
	|                | addons disable ingress               |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1               |                             |         |         |                     |                     |
	|----------------|--------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/09 23:07:19
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.21.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 23:07:19.596248 1572437 out.go:296] Setting OutFile to fd 1 ...
	I1009 23:07:19.596499 1572437 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1009 23:07:19.596528 1572437 out.go:309] Setting ErrFile to fd 2...
	I1009 23:07:19.596547 1572437 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1009 23:07:19.596834 1572437 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17375-1537865/.minikube/bin
	I1009 23:07:19.597291 1572437 out.go:303] Setting JSON to false
	I1009 23:07:19.598838 1572437 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":24583,"bootTime":1696868257,"procs":573,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1009 23:07:19.598946 1572437 start.go:138] virtualization:  
	I1009 23:07:19.602152 1572437 out.go:177] * [ingress-addon-legacy-789037] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1009 23:07:19.605541 1572437 out.go:177]   - MINIKUBE_LOCATION=17375
	I1009 23:07:19.607994 1572437 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 23:07:19.605692 1572437 notify.go:220] Checking for updates...
	I1009 23:07:19.610814 1572437 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17375-1537865/kubeconfig
	I1009 23:07:19.613034 1572437 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17375-1537865/.minikube
	I1009 23:07:19.615490 1572437 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1009 23:07:19.617682 1572437 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 23:07:19.620027 1572437 driver.go:378] Setting default libvirt URI to qemu:///system
	I1009 23:07:19.644591 1572437 docker.go:122] docker version: linux-24.0.6:Docker Engine - Community
	I1009 23:07:19.644687 1572437 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 23:07:19.730355 1572437 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:36 SystemTime:2023-10-09 23:07:19.720599178 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1009 23:07:19.730459 1572437 docker.go:295] overlay module found
	I1009 23:07:19.734190 1572437 out.go:177] * Using the docker driver based on user configuration
	I1009 23:07:19.736174 1572437 start.go:298] selected driver: docker
	I1009 23:07:19.736192 1572437 start.go:902] validating driver "docker" against <nil>
	I1009 23:07:19.736213 1572437 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 23:07:19.736905 1572437 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 23:07:19.814263 1572437 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:36 SystemTime:2023-10-09 23:07:19.804555126 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1009 23:07:19.814426 1572437 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1009 23:07:19.814668 1572437 start_flags.go:926] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 23:07:19.816843 1572437 out.go:177] * Using Docker driver with root privileges
	I1009 23:07:19.819083 1572437 cni.go:84] Creating CNI manager for ""
	I1009 23:07:19.819103 1572437 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 23:07:19.819132 1572437 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 23:07:19.819143 1572437 start_flags.go:323] config:
	{Name:ingress-addon-legacy-789037 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-789037 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
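The config dump above is the full cluster profile that minikube later persists to the profile's config.json. As a rough sketch of that persistence step (the struct below is a hand-picked, illustrative subset of the real fields, not minikube's actual type):

	package main

	import (
		"encoding/json"
		"log"
		"os"
	)

	// ClusterConfig mirrors a small, illustrative subset of the fields
	// visible in the config dump above; the real minikube struct is far larger.
	type ClusterConfig struct {
		Name              string
		Memory            int
		CPUs              int
		KubernetesVersion string
		ContainerRuntime  string
	}

	func main() {
		cfg := ClusterConfig{
			Name:              "ingress-addon-legacy-789037",
			Memory:            4096,
			CPUs:              2,
			KubernetesVersion: "v1.18.20",
			ContainerRuntime:  "crio",
		}
		data, err := json.MarshalIndent(cfg, "", "  ")
		if err != nil {
			log.Fatal(err)
		}
		// Writing to the working directory here; the real profile path and
		// file mode are not stated in the log, so both are assumptions.
		if err := os.WriteFile("config.json", data, 0o644); err != nil {
			log.Fatal(err)
		}
	}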
	I1009 23:07:19.821660 1572437 out.go:177] * Starting control plane node ingress-addon-legacy-789037 in cluster ingress-addon-legacy-789037
	I1009 23:07:19.823490 1572437 cache.go:122] Beginning downloading kic base image for docker with crio
	I1009 23:07:19.825632 1572437 out.go:177] * Pulling base image ...
	I1009 23:07:19.827811 1572437 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1009 23:07:19.827992 1572437 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon
	I1009 23:07:19.846044 1572437 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon, skipping pull
	I1009 23:07:19.846070 1572437 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae exists in daemon, skipping load
	I1009 23:07:20.009782 1572437 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I1009 23:07:20.009806 1572437 cache.go:57] Caching tarball of preloaded images
	I1009 23:07:20.009998 1572437 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1009 23:07:20.020417 1572437 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1009 23:07:20.022675 1572437 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I1009 23:07:20.142322 1572437 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4?checksum=md5:8ddd7f37d9a9977fe856222993d36c3d -> /home/jenkins/minikube-integration/17375-1537865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I1009 23:07:33.791847 1572437 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I1009 23:07:33.791960 1572437 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17375-1537865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I1009 23:07:34.994197 1572437 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on crio
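The preload download above carries its expected digest in the URL (checksum=md5:...), and the verification step re-hashes the file on disk before trusting it. A minimal sketch of that check, using the filename and digest taken from the log:

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"log"
		"os"
	)

	// verifyMD5 streams the file through an MD5 hash and compares it to the
	// hex digest that the download URL advertised (checksum=md5:...).
	func verifyMD5(path, want string) error {
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()
		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			return err
		}
		got := hex.EncodeToString(h.Sum(nil))
		if got != want {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
		}
		return nil
	}

	func main() {
		err := verifyMD5("preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4",
			"8ddd7f37d9a9977fe856222993d36c3d")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("preload checksum OK")
	}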
	I1009 23:07:34.994573 1572437 profile.go:148] Saving config to /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/ingress-addon-legacy-789037/config.json ...
	I1009 23:07:34.994606 1572437 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/ingress-addon-legacy-789037/config.json: {Name:mkeaac38bd0f3b0e7ec0521dc3ba15c95b3877a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 23:07:34.994793 1572437 cache.go:195] Successfully downloaded all kic artifacts
	I1009 23:07:34.994818 1572437 start.go:365] acquiring machines lock for ingress-addon-legacy-789037: {Name:mk0e8141e7b8683ea05555a9905ab9e7f448671e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 23:07:34.994879 1572437 start.go:369] acquired machines lock for "ingress-addon-legacy-789037" in 45.194µs
	I1009 23:07:34.994900 1572437 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-789037 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-789037 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 23:07:34.994980 1572437 start.go:125] createHost starting for "" (driver="docker")
	I1009 23:07:34.997486 1572437 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1009 23:07:34.997767 1572437 start.go:159] libmachine.API.Create for "ingress-addon-legacy-789037" (driver="docker")
	I1009 23:07:34.997815 1572437 client.go:168] LocalClient.Create starting
	I1009 23:07:34.997892 1572437 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/ca.pem
	I1009 23:07:34.997937 1572437 main.go:141] libmachine: Decoding PEM data...
	I1009 23:07:34.997958 1572437 main.go:141] libmachine: Parsing certificate...
	I1009 23:07:34.998022 1572437 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/cert.pem
	I1009 23:07:34.998044 1572437 main.go:141] libmachine: Decoding PEM data...
	I1009 23:07:34.998061 1572437 main.go:141] libmachine: Parsing certificate...
	I1009 23:07:34.998432 1572437 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-789037 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1009 23:07:35.033373 1572437 cli_runner.go:211] docker network inspect ingress-addon-legacy-789037 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1009 23:07:35.033468 1572437 network_create.go:281] running [docker network inspect ingress-addon-legacy-789037] to gather additional debugging logs...
	I1009 23:07:35.033497 1572437 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-789037
	W1009 23:07:35.052272 1572437 cli_runner.go:211] docker network inspect ingress-addon-legacy-789037 returned with exit code 1
	I1009 23:07:35.052310 1572437 network_create.go:284] error running [docker network inspect ingress-addon-legacy-789037]: docker network inspect ingress-addon-legacy-789037: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-789037 not found
	I1009 23:07:35.052326 1572437 network_create.go:286] output of [docker network inspect ingress-addon-legacy-789037]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-789037 not found
	
	** /stderr **
	I1009 23:07:35.052453 1572437 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 23:07:35.070652 1572437 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4000582040}
	I1009 23:07:35.070694 1572437 network_create.go:124] attempt to create docker network ingress-addon-legacy-789037 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1009 23:07:35.070755 1572437 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-789037 ingress-addon-legacy-789037
	I1009 23:07:35.156552 1572437 network_create.go:108] docker network ingress-addon-legacy-789037 192.168.49.0/24 created
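The network is created by shelling out to the Docker CLI with a fixed subnet, gateway, MTU, and minikube labels, exactly as in the Run: line above. A self-contained sketch of that invocation (error handling simplified):

	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	// createNetwork reproduces the `docker network create` invocation from
	// the log: a bridge network with a fixed subnet, gateway, MTU of 1500,
	// and the created_by/name minikube labels.
	func createNetwork(name, subnet, gateway string) error {
		out, err := exec.Command("docker", "network", "create",
			"--driver=bridge",
			"--subnet="+subnet,
			"--gateway="+gateway,
			"-o", "--ip-masq",
			"-o", "--icc",
			"-o", "com.docker.network.driver.mtu=1500",
			"--label=created_by.minikube.sigs.k8s.io=true",
			"--label=name.minikube.sigs.k8s.io="+name,
			name,
		).CombinedOutput()
		if err != nil {
			return fmt.Errorf("docker network create: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		if err := createNetwork("ingress-addon-legacy-789037",
			"192.168.49.0/24", "192.168.49.1"); err != nil {
			log.Fatal(err)
		}
	}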
	I1009 23:07:35.156583 1572437 kic.go:118] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-789037" container
	I1009 23:07:35.156662 1572437 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1009 23:07:35.174690 1572437 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-789037 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-789037 --label created_by.minikube.sigs.k8s.io=true
	I1009 23:07:35.194546 1572437 oci.go:103] Successfully created a docker volume ingress-addon-legacy-789037
	I1009 23:07:35.194644 1572437 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-789037-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-789037 --entrypoint /usr/bin/test -v ingress-addon-legacy-789037:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -d /var/lib
	I1009 23:07:36.693812 1572437 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-789037-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-789037 --entrypoint /usr/bin/test -v ingress-addon-legacy-789037:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -d /var/lib: (1.499128217s)
	I1009 23:07:36.693844 1572437 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-789037
	I1009 23:07:36.693871 1572437 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1009 23:07:36.693892 1572437 kic.go:191] Starting extracting preloaded images to volume ...
	I1009 23:07:36.693975 1572437 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17375-1537865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-789037:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -I lz4 -xf /preloaded.tar -C /extractDir
	I1009 23:07:41.594297 1572437 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17375-1537865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-789037:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -I lz4 -xf /preloaded.tar -C /extractDir: (4.900256804s)
	I1009 23:07:41.594330 1572437 kic.go:200] duration metric: took 4.900435 seconds to extract preloaded images to volume
	W1009 23:07:41.594475 1572437 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1009 23:07:41.594595 1572437 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1009 23:07:41.667910 1572437 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-789037 --name ingress-addon-legacy-789037 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-789037 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-789037 --network ingress-addon-legacy-789037 --ip 192.168.49.2 --volume ingress-addon-legacy-789037:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae
	I1009 23:07:42.066653 1572437 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-789037 --format={{.State.Running}}
	I1009 23:07:42.132493 1572437 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-789037 --format={{.State.Status}}
	I1009 23:07:42.166929 1572437 cli_runner.go:164] Run: docker exec ingress-addon-legacy-789037 stat /var/lib/dpkg/alternatives/iptables
	I1009 23:07:42.259804 1572437 oci.go:144] the created container "ingress-addon-legacy-789037" has a running status.
	I1009 23:07:42.259832 1572437 kic.go:222] Creating ssh key for kic: /home/jenkins/minikube-integration/17375-1537865/.minikube/machines/ingress-addon-legacy-789037/id_rsa...
	I1009 23:07:43.018843 1572437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-1537865/.minikube/machines/ingress-addon-legacy-789037/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1009 23:07:43.018930 1572437 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17375-1537865/.minikube/machines/ingress-addon-legacy-789037/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1009 23:07:43.046788 1572437 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-789037 --format={{.State.Status}}
	I1009 23:07:43.073855 1572437 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1009 23:07:43.073876 1572437 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-789037 chown docker:docker /home/docker/.ssh/authorized_keys]
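The kic ssh key above is a freshly generated key pair whose public half is installed as the docker user's authorized_keys inside the container. A sketch of the generation step; the 2048-bit key size is an assumption, since the log does not record the size minikube uses:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"encoding/pem"
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		// Generate an RSA key pair (key size is an assumption).
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		// PEM-encode the private key (id_rsa).
		privPEM := pem.EncodeToMemory(&pem.Block{
			Type:  "RSA PRIVATE KEY",
			Bytes: x509.MarshalPKCS1PrivateKey(key),
		})
		if err := os.WriteFile("id_rsa", privPEM, 0o600); err != nil {
			log.Fatal(err)
		}
		// Encode the public half in authorized_keys format (id_rsa.pub).
		pub, err := ssh.NewPublicKey(&key.PublicKey)
		if err != nil {
			log.Fatal(err)
		}
		if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0o644); err != nil {
			log.Fatal(err)
		}
	}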
	I1009 23:07:43.150698 1572437 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-789037 --format={{.State.Status}}
	I1009 23:07:43.171550 1572437 machine.go:88] provisioning docker machine ...
	I1009 23:07:43.171583 1572437 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-789037"
	I1009 23:07:43.171653 1572437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-789037
	I1009 23:07:43.209784 1572437 main.go:141] libmachine: Using SSH client type: native
	I1009 23:07:43.210234 1572437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 34374 <nil> <nil>}
	I1009 23:07:43.210254 1572437 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-789037 && echo "ingress-addon-legacy-789037" | sudo tee /etc/hostname
	I1009 23:07:43.381915 1572437 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-789037
	
	I1009 23:07:43.382004 1572437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-789037
	I1009 23:07:43.406977 1572437 main.go:141] libmachine: Using SSH client type: native
	I1009 23:07:43.407540 1572437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 34374 <nil> <nil>}
	I1009 23:07:43.407563 1572437 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-789037' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-789037/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-789037' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 23:07:43.544432 1572437 main.go:141] libmachine: SSH cmd err, output: <nil>: 
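Provisioning commands like the hostname script above run over the container's forwarded 22/tcp port (127.0.0.1:34374 in this run). A minimal sketch of one such remote command, assuming the id_rsa generated earlier; the InsecureIgnoreHostKey callback is a simplification for the sketch, not a statement about how minikube verifies host keys:

	package main

	import (
		"fmt"
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		keyBytes, err := os.ReadFile("id_rsa")
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(keyBytes)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // sketch only
		}
		// 127.0.0.1:34374 is the forwarded 22/tcp port from the log.
		client, err := ssh.Dial("tcp", "127.0.0.1:34374", cfg)
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		defer sess.Close()
		out, err := sess.CombinedOutput(
			`sudo hostname ingress-addon-legacy-789037 && echo "ingress-addon-legacy-789037" | sudo tee /etc/hostname`)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s", out)
	}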
	I1009 23:07:43.544458 1572437 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17375-1537865/.minikube CaCertPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17375-1537865/.minikube}
	I1009 23:07:43.544480 1572437 ubuntu.go:177] setting up certificates
	I1009 23:07:43.544489 1572437 provision.go:83] configureAuth start
	I1009 23:07:43.544550 1572437 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-789037
	I1009 23:07:43.563374 1572437 provision.go:138] copyHostCerts
	I1009 23:07:43.563429 1572437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17375-1537865/.minikube/key.pem
	I1009 23:07:43.563472 1572437 exec_runner.go:144] found /home/jenkins/minikube-integration/17375-1537865/.minikube/key.pem, removing ...
	I1009 23:07:43.563484 1572437 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17375-1537865/.minikube/key.pem
	I1009 23:07:43.563571 1572437 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17375-1537865/.minikube/key.pem (1679 bytes)
	I1009 23:07:43.563666 1572437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17375-1537865/.minikube/ca.pem
	I1009 23:07:43.563690 1572437 exec_runner.go:144] found /home/jenkins/minikube-integration/17375-1537865/.minikube/ca.pem, removing ...
	I1009 23:07:43.563695 1572437 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17375-1537865/.minikube/ca.pem
	I1009 23:07:43.563732 1572437 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17375-1537865/.minikube/ca.pem (1078 bytes)
	I1009 23:07:43.563784 1572437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17375-1537865/.minikube/cert.pem
	I1009 23:07:43.563806 1572437 exec_runner.go:144] found /home/jenkins/minikube-integration/17375-1537865/.minikube/cert.pem, removing ...
	I1009 23:07:43.563814 1572437 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17375-1537865/.minikube/cert.pem
	I1009 23:07:43.563847 1572437 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17375-1537865/.minikube/cert.pem (1123 bytes)
	I1009 23:07:43.563909 1572437 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17375-1537865/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17375-1537865/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17375-1537865/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-789037 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-789037]
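The server cert above is issued against the profile's CA with the SAN list shown in the log line. A rough sketch of building such a certificate with crypto/x509; it self-signs for brevity where minikube signs with ca.pem/ca-key.pem, and the 2048-bit key is an assumption:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		priv, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.ingress-addon-legacy-789037"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs match the san=[...] list in the log line above.
			IPAddresses: []net.IP{net.ParseIP("192.168.49.2"), net.ParseIP("127.0.0.1")},
			DNSNames:    []string{"localhost", "minikube", "ingress-addon-legacy-789037"},
		}
		// Self-signed here; the real flow signs with the CA key instead.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &priv.PublicKey, priv)
		if err != nil {
			log.Fatal(err)
		}
		out := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
		if err := os.WriteFile("server.pem", out, 0o644); err != nil {
			log.Fatal(err)
		}
	}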
	I1009 23:07:43.929655 1572437 provision.go:172] copyRemoteCerts
	I1009 23:07:43.929728 1572437 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 23:07:43.929771 1572437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-789037
	I1009 23:07:43.948097 1572437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34374 SSHKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/machines/ingress-addon-legacy-789037/id_rsa Username:docker}
	I1009 23:07:44.046906 1572437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-1537865/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 23:07:44.046976 1572437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-1537865/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 23:07:44.078274 1572437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 23:07:44.078339 1572437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 23:07:44.108544 1572437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-1537865/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 23:07:44.108663 1572437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-1537865/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I1009 23:07:44.139525 1572437 provision.go:86] duration metric: configureAuth took 595.019442ms
	I1009 23:07:44.139590 1572437 ubuntu.go:193] setting minikube options for container-runtime
	I1009 23:07:44.139810 1572437 config.go:182] Loaded profile config "ingress-addon-legacy-789037": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1009 23:07:44.139934 1572437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-789037
	I1009 23:07:44.158189 1572437 main.go:141] libmachine: Using SSH client type: native
	I1009 23:07:44.158615 1572437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 34374 <nil> <nil>}
	I1009 23:07:44.158637 1572437 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 23:07:44.431176 1572437 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 23:07:44.431198 1572437 machine.go:91] provisioned docker machine in 1.259626766s
	I1009 23:07:44.431209 1572437 client.go:171] LocalClient.Create took 9.433385403s
	I1009 23:07:44.431222 1572437 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-789037" took 9.433456977s
	I1009 23:07:44.431230 1572437 start.go:300] post-start starting for "ingress-addon-legacy-789037" (driver="docker")
	I1009 23:07:44.431240 1572437 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 23:07:44.431322 1572437 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 23:07:44.431361 1572437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-789037
	I1009 23:07:44.450077 1572437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34374 SSHKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/machines/ingress-addon-legacy-789037/id_rsa Username:docker}
	I1009 23:07:44.550546 1572437 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 23:07:44.554619 1572437 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 23:07:44.554653 1572437 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1009 23:07:44.554664 1572437 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1009 23:07:44.554672 1572437 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1009 23:07:44.554683 1572437 filesync.go:126] Scanning /home/jenkins/minikube-integration/17375-1537865/.minikube/addons for local assets ...
	I1009 23:07:44.554746 1572437 filesync.go:126] Scanning /home/jenkins/minikube-integration/17375-1537865/.minikube/files for local assets ...
	I1009 23:07:44.554834 1572437 filesync.go:149] local asset: /home/jenkins/minikube-integration/17375-1537865/.minikube/files/etc/ssl/certs/15432152.pem -> 15432152.pem in /etc/ssl/certs
	I1009 23:07:44.554845 1572437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-1537865/.minikube/files/etc/ssl/certs/15432152.pem -> /etc/ssl/certs/15432152.pem
	I1009 23:07:44.554955 1572437 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 23:07:44.565724 1572437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-1537865/.minikube/files/etc/ssl/certs/15432152.pem --> /etc/ssl/certs/15432152.pem (1708 bytes)
	I1009 23:07:44.594723 1572437 start.go:303] post-start completed in 163.478504ms
	I1009 23:07:44.595105 1572437 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-789037
	I1009 23:07:44.613499 1572437 profile.go:148] Saving config to /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/ingress-addon-legacy-789037/config.json ...
	I1009 23:07:44.613851 1572437 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 23:07:44.613899 1572437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-789037
	I1009 23:07:44.632268 1572437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34374 SSHKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/machines/ingress-addon-legacy-789037/id_rsa Username:docker}
	I1009 23:07:44.725184 1572437 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 23:07:44.730957 1572437 start.go:128] duration metric: createHost completed in 9.73595944s
	I1009 23:07:44.730981 1572437 start.go:83] releasing machines lock for "ingress-addon-legacy-789037", held for 9.73609128s
	I1009 23:07:44.731063 1572437 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-789037
	I1009 23:07:44.749300 1572437 ssh_runner.go:195] Run: cat /version.json
	I1009 23:07:44.749359 1572437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-789037
	I1009 23:07:44.749649 1572437 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 23:07:44.749716 1572437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-789037
	I1009 23:07:44.771469 1572437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34374 SSHKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/machines/ingress-addon-legacy-789037/id_rsa Username:docker}
	I1009 23:07:44.780719 1572437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34374 SSHKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/machines/ingress-addon-legacy-789037/id_rsa Username:docker}
	I1009 23:07:45.009218 1572437 ssh_runner.go:195] Run: systemctl --version
	I1009 23:07:45.050300 1572437 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 23:07:45.255487 1572437 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1009 23:07:45.274843 1572437 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 23:07:45.314885 1572437 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1009 23:07:45.315018 1572437 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 23:07:45.374684 1572437 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
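The find/mv pipeline above sidelines any bridge or podman CNI configs by renaming them to *.mk_disabled, so the kindnet CNI chosen earlier is the only one CRI-O picks up. The same idea in Go (must run as root; glob patterns taken from the log):

	package main

	import (
		"log"
		"os"
		"path/filepath"
		"strings"
	)

	func main() {
		// Mirror `find /etc/cni/net.d ... -exec mv {} {}.mk_disabled`:
		// rename bridge/podman CNI configs out of CRI-O's search path.
		for _, pattern := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
			matches, err := filepath.Glob(pattern)
			if err != nil {
				log.Fatal(err)
			}
			for _, m := range matches {
				if strings.HasSuffix(m, ".mk_disabled") {
					continue // already disabled
				}
				if err := os.Rename(m, m+".mk_disabled"); err != nil {
					log.Fatal(err)
				}
			}
		}
	}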
	I1009 23:07:45.374713 1572437 start.go:472] detecting cgroup driver to use...
	I1009 23:07:45.374752 1572437 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1009 23:07:45.374830 1572437 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 23:07:45.400853 1572437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 23:07:45.420104 1572437 docker.go:198] disabling cri-docker service (if available) ...
	I1009 23:07:45.420250 1572437 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 23:07:45.440858 1572437 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 23:07:45.463734 1572437 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 23:07:45.583704 1572437 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 23:07:45.694830 1572437 docker.go:214] disabling docker service ...
	I1009 23:07:45.694895 1572437 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 23:07:45.717230 1572437 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 23:07:45.731326 1572437 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 23:07:45.822772 1572437 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 23:07:45.920228 1572437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 23:07:45.934404 1572437 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 23:07:45.954233 1572437 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1009 23:07:45.954362 1572437 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 23:07:45.971080 1572437 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 23:07:45.971247 1572437 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 23:07:45.983558 1572437 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 23:07:45.995908 1572437 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 23:07:46.014081 1572437 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 23:07:46.026458 1572437 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 23:07:46.037490 1572437 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 23:07:46.047956 1572437 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 23:07:46.153822 1572437 ssh_runner.go:195] Run: sudo systemctl restart crio
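The sed invocations above each patch a single `key = value` line in CRI-O's drop-in config before the daemon restart. An equivalent in-place rewrite in Go, a sketch using the same file path and values as the log (run as root):

	package main

	import (
		"log"
		"os"
		"regexp"
	)

	// setCrioOption rewrites one `key = value` line in a CRI-O drop-in
	// config, mirroring the sed commands in the log.
	func setCrioOption(path, key, value string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
		out := re.ReplaceAll(data, []byte(key+` = "`+value+`"`))
		return os.WriteFile(path, out, 0o644)
	}

	func main() {
		const conf = "/etc/crio/crio.conf.d/02-crio.conf"
		if err := setCrioOption(conf, "pause_image", "registry.k8s.io/pause:3.2"); err != nil {
			log.Fatal(err)
		}
		if err := setCrioOption(conf, "cgroup_manager", "cgroupfs"); err != nil {
			log.Fatal(err)
		}
		// A `systemctl restart crio` would follow, as in the log.
	}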
	I1009 23:07:46.272872 1572437 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 23:07:46.272979 1572437 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 23:07:46.280443 1572437 start.go:540] Will wait 60s for crictl version
	I1009 23:07:46.280552 1572437 ssh_runner.go:195] Run: which crictl
	I1009 23:07:46.285339 1572437 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 23:07:46.331468 1572437 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1009 23:07:46.331553 1572437 ssh_runner.go:195] Run: crio --version
	I1009 23:07:46.373742 1572437 ssh_runner.go:195] Run: crio --version
	I1009 23:07:46.422705 1572437 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.6 ...
	I1009 23:07:46.424902 1572437 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-789037 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 23:07:46.443166 1572437 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 23:07:46.447926 1572437 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 23:07:46.461283 1572437 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1009 23:07:46.461361 1572437 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 23:07:46.514841 1572437 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1009 23:07:46.514913 1572437 ssh_runner.go:195] Run: which lz4
	I1009 23:07:46.519466 1572437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-1537865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 -> /preloaded.tar.lz4
	I1009 23:07:46.519563 1572437 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1009 23:07:46.523975 1572437 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1009 23:07:46.524015 1572437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-1537865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 --> /preloaded.tar.lz4 (489766197 bytes)
	I1009 23:07:48.809976 1572437 crio.go:444] Took 2.290425 seconds to copy over tarball
	I1009 23:07:48.810087 1572437 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1009 23:07:51.578974 1572437 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.76885304s)
	I1009 23:07:51.579025 1572437 crio.go:451] Took 2.769012 seconds to extract the tarball
	I1009 23:07:51.579036 1572437 ssh_runner.go:146] rm: /preloaded.tar.lz4
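The preload is unpacked on the machine by piping the tarball through lz4 into /var, as in the Run: line above. A sketch that reproduces the command and times it the way the duration metrics do:

	package main

	import (
		"log"
		"os/exec"
		"time"
	)

	func main() {
		// Same extraction the log performs on the remote side:
		// decompress with lz4 and unpack into /var, then report the duration.
		start := time.Now()
		cmd := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
		if out, err := cmd.CombinedOutput(); err != nil {
			log.Fatalf("extract failed: %v: %s", err, out)
		}
		log.Printf("extracted preload in %s", time.Since(start))
	}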
	I1009 23:07:51.902864 1572437 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 23:07:51.943802 1572437 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1009 23:07:51.943826 1572437 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1009 23:07:51.943864 1572437 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 23:07:51.944091 1572437 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I1009 23:07:51.944200 1572437 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1009 23:07:51.944279 1572437 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I1009 23:07:51.944357 1572437 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I1009 23:07:51.944429 1572437 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1009 23:07:51.944507 1572437 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1009 23:07:51.944584 1572437 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1009 23:07:51.945597 1572437 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1009 23:07:51.946040 1572437 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I1009 23:07:51.946203 1572437 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1009 23:07:51.946326 1572437 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1009 23:07:51.946427 1572437 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I1009 23:07:51.946560 1572437 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I1009 23:07:51.946613 1572437 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 23:07:51.946676 1572437 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I1009 23:07:52.354077 1572437 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	W1009 23:07:52.367604 1572437 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1009 23:07:52.367890 1572437 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	W1009 23:07:52.368099 1572437 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1009 23:07:52.368218 1572437 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	W1009 23:07:52.393906 1572437 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1009 23:07:52.394164 1572437 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	W1009 23:07:52.403261 1572437 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I1009 23:07:52.403522 1572437 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I1009 23:07:52.422097 1572437 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I1009 23:07:52.422213 1572437 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1009 23:07:52.422284 1572437 ssh_runner.go:195] Run: which crictl
	W1009 23:07:52.436425 1572437 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1009 23:07:52.436647 1572437 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	W1009 23:07:52.459453 1572437 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I1009 23:07:52.459681 1572437 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I1009 23:07:52.519022 1572437 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I1009 23:07:52.519157 1572437 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I1009 23:07:52.519229 1572437 ssh_runner.go:195] Run: which crictl
	I1009 23:07:52.522456 1572437 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I1009 23:07:52.522546 1572437 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I1009 23:07:52.522608 1572437 ssh_runner.go:195] Run: which crictl
	I1009 23:07:52.522707 1572437 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I1009 23:07:52.522749 1572437 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1009 23:07:52.522802 1572437 ssh_runner.go:195] Run: which crictl
	I1009 23:07:52.587326 1572437 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I1009 23:07:52.587531 1572437 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I1009 23:07:52.587588 1572437 ssh_runner.go:195] Run: which crictl
	I1009 23:07:52.587495 1572437 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1009 23:07:52.591254 1572437 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I1009 23:07:52.591298 1572437 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I1009 23:07:52.591348 1572437 ssh_runner.go:195] Run: which crictl
	I1009 23:07:52.591432 1572437 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I1009 23:07:52.591447 1572437 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I1009 23:07:52.591471 1572437 ssh_runner.go:195] Run: which crictl
	I1009 23:07:52.591524 1572437 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I1009 23:07:52.591582 1572437 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I1009 23:07:52.591630 1572437 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	W1009 23:07:52.635105 1572437 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1009 23:07:52.635325 1572437 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
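The repeated "arch mismatch: want arm64 got amd64" warnings mean the images visible to the local daemon were built for the wrong architecture, so minikube discards them and falls back to its on-disk cache. The check can be reproduced by hand with podman (a sketch; the image name is taken from the warnings above, and the .Architecture template field is assumed from podman's inspect output):

    # Show the stored ID and architecture of an image; an amd64 result
    # on this arm64 host is what triggers the "fixing" path seen above.
    sudo podman image inspect --format '{{.Id}} {{.Architecture}}' \
        registry.k8s.io/kube-proxy:v1.18.20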
	I1009 23:07:52.666953 1572437 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17375-1537865/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I1009 23:07:52.667038 1572437 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I1009 23:07:52.667131 1572437 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I1009 23:07:52.779075 1572437 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17375-1537865/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	I1009 23:07:52.779187 1572437 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I1009 23:07:52.779293 1572437 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17375-1537865/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	I1009 23:07:52.779336 1572437 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17375-1537865/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	I1009 23:07:52.891528 1572437 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1009 23:07:52.891575 1572437 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 23:07:52.891652 1572437 ssh_runner.go:195] Run: which crictl
	I1009 23:07:52.891759 1572437 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17375-1537865/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	I1009 23:07:52.891804 1572437 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17375-1537865/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	I1009 23:07:52.891853 1572437 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17375-1537865/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	I1009 23:07:52.896262 1572437 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 23:07:52.964932 1572437 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17375-1537865/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1009 23:07:52.965008 1572437 cache_images.go:92] LoadImages completed in 1.021169701s
	W1009 23:07:52.965098 1572437 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17375-1537865/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2: no such file or directory
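The "Unable to load cached images" warning is non-fatal: the pause_3.2 tarball was simply absent from the local cache, so the images are pulled during kubeadm's preflight instead. The cache can be inspected, and pre-populated, from the host (a sketch assuming the default MINIKUBE_HOME; minikube cache add is the stock subcommand for this):

    # See which image tarballs actually exist for this architecture
    ls ~/.minikube/cache/images/arm64/registry.k8s.io/
    # Pre-populate the cache so the next LoadImages pass succeeds
    minikube cache add registry.k8s.io/pause:3.2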
	I1009 23:07:52.965193 1572437 ssh_runner.go:195] Run: crio config
	I1009 23:07:53.046671 1572437 cni.go:84] Creating CNI manager for ""
	I1009 23:07:53.046736 1572437 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 23:07:53.046788 1572437 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1009 23:07:53.046842 1572437 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-789037 NodeName:ingress-addon-legacy-789037 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1009 23:07:53.047032 1572437 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-789037"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
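This rendered config is what lands on the node as /var/tmp/minikube/kubeadm.yaml (the 2123-byte scp below) before kubeadm init consumes it. To confirm what was actually written, it can be read back over SSH (a sketch using the profile name from this run):

    minikube ssh -p ingress-addon-legacy-789037 -- \
        sudo cat /var/tmp/minikube/kubeadm.yaml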
	I1009 23:07:53.047170 1572437 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=ingress-addon-legacy-789037 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-789037 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
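The ExecStart override above is installed as the 10-kubeadm.conf drop-in (the 486-byte scp below), next to the stock kubelet.service unit. The merged unit the node will actually run can be viewed in one step (a sketch, same profile):

    minikube ssh -p ingress-addon-legacy-789037 -- sudo systemctl cat kubelet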
	I1009 23:07:53.047263 1572437 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I1009 23:07:53.059172 1572437 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 23:07:53.059263 1572437 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 23:07:53.070367 1572437 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I1009 23:07:53.092271 1572437 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I1009 23:07:53.114009 1572437 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1009 23:07:53.135713 1572437 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1009 23:07:53.140514 1572437 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
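That one-liner is the usual idempotent /etc/hosts update: rebuild the file without any stale control-plane.minikube.internal entry, append the fresh mapping, then install the temp file as root. Spelled out (equivalent shell, not the literal minikube source):

    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts   # drop the old entry
      echo "192.168.49.2	control-plane.minikube.internal"       # append the fresh one
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts                                 # install as root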
	I1009 23:07:53.154136 1572437 certs.go:56] Setting up /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/ingress-addon-legacy-789037 for IP: 192.168.49.2
	I1009 23:07:53.154169 1572437 certs.go:190] acquiring lock for shared ca certs: {Name:mk430c21a56d31b4f15423923c65864a3e3a3c9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 23:07:53.154347 1572437 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17375-1537865/.minikube/ca.key
	I1009 23:07:53.154403 1572437 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17375-1537865/.minikube/proxy-client-ca.key
	I1009 23:07:53.154452 1572437 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/ingress-addon-legacy-789037/client.key
	I1009 23:07:53.154467 1572437 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/ingress-addon-legacy-789037/client.crt with IP's: []
	I1009 23:07:53.486572 1572437 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/ingress-addon-legacy-789037/client.crt ...
	I1009 23:07:53.486607 1572437 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/ingress-addon-legacy-789037/client.crt: {Name:mk35a620b838a105263e23495203d9a7e5f2f44b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 23:07:53.486835 1572437 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/ingress-addon-legacy-789037/client.key ...
	I1009 23:07:53.486851 1572437 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/ingress-addon-legacy-789037/client.key: {Name:mk32109bbdbdcb68168f12e805c92c1a436ae81a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 23:07:53.486939 1572437 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/ingress-addon-legacy-789037/apiserver.key.dd3b5fb2
	I1009 23:07:53.486955 1572437 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/ingress-addon-legacy-789037/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1009 23:07:54.641924 1572437 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/ingress-addon-legacy-789037/apiserver.crt.dd3b5fb2 ...
	I1009 23:07:54.641957 1572437 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/ingress-addon-legacy-789037/apiserver.crt.dd3b5fb2: {Name:mk581aa003ff728e62ee71c67187cda3dcb62c51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 23:07:54.642157 1572437 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/ingress-addon-legacy-789037/apiserver.key.dd3b5fb2 ...
	I1009 23:07:54.642169 1572437 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/ingress-addon-legacy-789037/apiserver.key.dd3b5fb2: {Name:mkb2b424ca74b32b60a2b47a5392310e54b5aa32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 23:07:54.642266 1572437 certs.go:337] copying /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/ingress-addon-legacy-789037/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/ingress-addon-legacy-789037/apiserver.crt
	I1009 23:07:54.642349 1572437 certs.go:341] copying /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/ingress-addon-legacy-789037/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/ingress-addon-legacy-789037/apiserver.key
	I1009 23:07:54.642410 1572437 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/ingress-addon-legacy-789037/proxy-client.key
	I1009 23:07:54.642425 1572437 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/ingress-addon-legacy-789037/proxy-client.crt with IP's: []
	I1009 23:07:55.444838 1572437 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/ingress-addon-legacy-789037/proxy-client.crt ...
	I1009 23:07:55.444869 1572437 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/ingress-addon-legacy-789037/proxy-client.crt: {Name:mk032df6e14937caa77c636b6c6e1089aa6be370 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 23:07:55.445055 1572437 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/ingress-addon-legacy-789037/proxy-client.key ...
	I1009 23:07:55.445068 1572437 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/ingress-addon-legacy-789037/proxy-client.key: {Name:mkeafc308ceb74a80a67a2628c02e0a7df4f6dd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 23:07:55.445155 1572437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/ingress-addon-legacy-789037/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 23:07:55.445175 1572437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/ingress-addon-legacy-789037/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 23:07:55.445187 1572437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/ingress-addon-legacy-789037/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 23:07:55.445203 1572437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/ingress-addon-legacy-789037/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 23:07:55.445228 1572437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-1537865/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 23:07:55.445244 1572437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-1537865/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 23:07:55.445258 1572437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-1537865/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 23:07:55.445269 1572437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-1537865/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 23:07:55.445324 1572437 certs.go:437] found cert: /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/home/jenkins/minikube-integration/17375-1537865/.minikube/certs/1543215.pem (1338 bytes)
	W1009 23:07:55.445363 1572437 certs.go:433] ignoring /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/home/jenkins/minikube-integration/17375-1537865/.minikube/certs/1543215_empty.pem, impossibly tiny 0 bytes
	I1009 23:07:55.445377 1572437 certs.go:437] found cert: /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/home/jenkins/minikube-integration/17375-1537865/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 23:07:55.445405 1572437 certs.go:437] found cert: /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/home/jenkins/minikube-integration/17375-1537865/.minikube/certs/ca.pem (1078 bytes)
	I1009 23:07:55.445433 1572437 certs.go:437] found cert: /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/home/jenkins/minikube-integration/17375-1537865/.minikube/certs/cert.pem (1123 bytes)
	I1009 23:07:55.445459 1572437 certs.go:437] found cert: /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/home/jenkins/minikube-integration/17375-1537865/.minikube/certs/key.pem (1679 bytes)
	I1009 23:07:55.445516 1572437 certs.go:437] found cert: /home/jenkins/minikube-integration/17375-1537865/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17375-1537865/.minikube/files/etc/ssl/certs/15432152.pem (1708 bytes)
	I1009 23:07:55.445546 1572437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-1537865/.minikube/files/etc/ssl/certs/15432152.pem -> /usr/share/ca-certificates/15432152.pem
	I1009 23:07:55.445561 1572437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-1537865/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 23:07:55.445575 1572437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/1543215.pem -> /usr/share/ca-certificates/1543215.pem
	I1009 23:07:55.446183 1572437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/ingress-addon-legacy-789037/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1009 23:07:55.475910 1572437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/ingress-addon-legacy-789037/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 23:07:55.504539 1572437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/ingress-addon-legacy-789037/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 23:07:55.533515 1572437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/ingress-addon-legacy-789037/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 23:07:55.562162 1572437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-1537865/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 23:07:55.590902 1572437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-1537865/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 23:07:55.619908 1572437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-1537865/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 23:07:55.648311 1572437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-1537865/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 23:07:55.676481 1572437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-1537865/.minikube/files/etc/ssl/certs/15432152.pem --> /usr/share/ca-certificates/15432152.pem (1708 bytes)
	I1009 23:07:55.704987 1572437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-1537865/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 23:07:55.737064 1572437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/1543215.pem --> /usr/share/ca-certificates/1543215.pem (1338 bytes)
	I1009 23:07:55.766173 1572437 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
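With everything copied under /var/lib/minikube/certs, the apiserver certificate can be sanity-checked against the SAN set requested at 23:07:53 ([192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1] plus the DNS names); a sketch with openssl on the node:

    minikube ssh -p ingress-addon-legacy-789037 -- \
        "sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt \
             -noout -text | grep -A1 'Subject Alternative Name'"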
	I1009 23:07:55.786818 1572437 ssh_runner.go:195] Run: openssl version
	I1009 23:07:55.793807 1572437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15432152.pem && ln -fs /usr/share/ca-certificates/15432152.pem /etc/ssl/certs/15432152.pem"
	I1009 23:07:55.805953 1572437 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15432152.pem
	I1009 23:07:55.810819 1572437 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  9 23:03 /usr/share/ca-certificates/15432152.pem
	I1009 23:07:55.810944 1572437 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15432152.pem
	I1009 23:07:55.819439 1572437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15432152.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 23:07:55.830996 1572437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 23:07:55.842998 1572437 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 23:07:55.847546 1572437 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  9 22:55 /usr/share/ca-certificates/minikubeCA.pem
	I1009 23:07:55.847611 1572437 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 23:07:55.856102 1572437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 23:07:55.867940 1572437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1543215.pem && ln -fs /usr/share/ca-certificates/1543215.pem /etc/ssl/certs/1543215.pem"
	I1009 23:07:55.879346 1572437 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1543215.pem
	I1009 23:07:55.883775 1572437 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  9 23:03 /usr/share/ca-certificates/1543215.pem
	I1009 23:07:55.883859 1572437 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1543215.pem
	I1009 23:07:55.892393 1572437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1543215.pem /etc/ssl/certs/51391683.0"
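The openssl x509 -hash / ln -fs pairs implement OpenSSL's CA-path convention: each trusted certificate must be reachable through a symlink named <subject-hash>.0, which is how minikubeCA.pem maps to b5213941.0 above. The hash for any PEM can be computed directly:

    # Prints b5213941 for minikubeCA.pem, matching the symlink created above
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem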
	I1009 23:07:55.904171 1572437 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1009 23:07:55.908850 1572437 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1009 23:07:55.908923 1572437 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-789037 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-789037 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1009 23:07:55.909029 1572437 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 23:07:55.909100 1572437 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 23:07:55.954841 1572437 cri.go:89] found id: ""
	I1009 23:07:55.954962 1572437 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 23:07:55.965882 1572437 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 23:07:55.976919 1572437 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1009 23:07:55.976985 1572437 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 23:07:55.987693 1572437 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 23:07:55.987765 1572437 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
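Because the docker driver cannot satisfy checks like SystemVerification or the swap/CPU probes, the init command suppresses them with --ignore-preflight-errors. The same checks can be re-run in isolation on the node to see exactly what would have failed (kubeadm exposes preflight as a standalone init phase):

    sudo kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml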
	I1009 23:07:56.044976 1572437 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1009 23:07:56.045361 1572437 kubeadm.go:322] [preflight] Running pre-flight checks
	I1009 23:07:56.100772 1572437 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1009 23:07:56.100858 1572437 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1047-aws
	I1009 23:07:56.100899 1572437 kubeadm.go:322] OS: Linux
	I1009 23:07:56.100946 1572437 kubeadm.go:322] CGROUPS_CPU: enabled
	I1009 23:07:56.100995 1572437 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1009 23:07:56.101043 1572437 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1009 23:07:56.101092 1572437 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1009 23:07:56.101159 1572437 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1009 23:07:56.101212 1572437 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1009 23:07:56.191375 1572437 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 23:07:56.191556 1572437 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 23:07:56.191698 1572437 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 23:07:56.429176 1572437 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 23:07:56.430747 1572437 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 23:07:56.431057 1572437 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1009 23:07:56.534230 1572437 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 23:07:56.538996 1572437 out.go:204]   - Generating certificates and keys ...
	I1009 23:07:56.539163 1572437 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1009 23:07:56.539250 1572437 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1009 23:07:57.139489 1572437 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1009 23:07:57.748740 1572437 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1009 23:07:58.170817 1572437 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1009 23:07:58.616467 1572437 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1009 23:07:59.318430 1572437 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1009 23:07:59.318808 1572437 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-789037 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1009 23:07:59.586806 1572437 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1009 23:07:59.587258 1572437 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-789037 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1009 23:08:00.814386 1572437 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1009 23:08:01.815144 1572437 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1009 23:08:02.441208 1572437 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1009 23:08:02.441563 1572437 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 23:08:02.853172 1572437 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 23:08:03.080407 1572437 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 23:08:03.440367 1572437 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 23:08:03.752604 1572437 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 23:08:03.753629 1572437 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 23:08:03.756577 1572437 out.go:204]   - Booting up control plane ...
	I1009 23:08:03.756682 1572437 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 23:08:03.767526 1572437 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 23:08:03.771528 1572437 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 23:08:03.771627 1572437 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 23:08:03.773433 1572437 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1009 23:08:16.277516 1572437 kubeadm.go:322] [apiclient] All control plane components are healthy after 12.502750 seconds
	I1009 23:08:16.277629 1572437 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1009 23:08:16.292454 1572437 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I1009 23:08:16.812417 1572437 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1009 23:08:16.812556 1572437 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-789037 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1009 23:08:17.320305 1572437 kubeadm.go:322] [bootstrap-token] Using token: 3j6fz9.ugkqe9bcpd6iddng
	I1009 23:08:17.324129 1572437 out.go:204]   - Configuring RBAC rules ...
	I1009 23:08:17.324255 1572437 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1009 23:08:17.329093 1572437 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1009 23:08:17.344269 1572437 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1009 23:08:17.347304 1572437 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1009 23:08:17.350717 1572437 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1009 23:08:17.355014 1572437 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1009 23:08:17.366297 1572437 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1009 23:08:17.733998 1572437 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1009 23:08:17.793958 1572437 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1009 23:08:17.795216 1572437 kubeadm.go:322] 
	I1009 23:08:17.795282 1572437 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1009 23:08:17.795292 1572437 kubeadm.go:322] 
	I1009 23:08:17.795364 1572437 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1009 23:08:17.795373 1572437 kubeadm.go:322] 
	I1009 23:08:17.795398 1572437 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1009 23:08:17.795456 1572437 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1009 23:08:17.795507 1572437 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1009 23:08:17.795515 1572437 kubeadm.go:322] 
	I1009 23:08:17.795564 1572437 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1009 23:08:17.795645 1572437 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1009 23:08:17.795713 1572437 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1009 23:08:17.795721 1572437 kubeadm.go:322] 
	I1009 23:08:17.795799 1572437 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1009 23:08:17.795874 1572437 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1009 23:08:17.795884 1572437 kubeadm.go:322] 
	I1009 23:08:17.795963 1572437 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 3j6fz9.ugkqe9bcpd6iddng \
	I1009 23:08:17.796065 1572437 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:e2aebf53348f507bad0adab8a765b229b70810954e22f1e7a919941009267e3f \
	I1009 23:08:17.796089 1572437 kubeadm.go:322]     --control-plane 
	I1009 23:08:17.796102 1572437 kubeadm.go:322] 
	I1009 23:08:17.796192 1572437 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1009 23:08:17.796201 1572437 kubeadm.go:322] 
	I1009 23:08:17.796277 1572437 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 3j6fz9.ugkqe9bcpd6iddng \
	I1009 23:08:17.796380 1572437 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:e2aebf53348f507bad0adab8a765b229b70810954e22f1e7a919941009267e3f 
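The --discovery-token-ca-cert-hash in the join command is just the SHA-256 of the cluster CA's public key, so it can be recomputed at any time from the CA certificate (the kubeadm-documented openssl pipeline, pointed at minikube's cert directory):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'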
	I1009 23:08:17.799105 1572437 kubeadm.go:322] W1009 23:07:56.044045    1236 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1009 23:08:17.799337 1572437 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1047-aws\n", err: exit status 1
	I1009 23:08:17.799442 1572437 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 23:08:17.799563 1572437 kubeadm.go:322] W1009 23:08:03.766516    1236 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1009 23:08:17.799682 1572437 kubeadm.go:322] W1009 23:08:03.768403    1236 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1009 23:08:17.799699 1572437 cni.go:84] Creating CNI manager for ""
	I1009 23:08:17.799713 1572437 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 23:08:17.802047 1572437 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1009 23:08:17.804125 1572437 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1009 23:08:17.809422 1572437 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I1009 23:08:17.809446 1572437 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1009 23:08:17.836069 1572437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1009 23:08:18.332053 1572437 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1009 23:08:18.332139 1572437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:08:18.332192 1572437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90 minikube.k8s.io/name=ingress-addon-legacy-789037 minikube.k8s.io/updated_at=2023_10_09T23_08_18_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:08:18.502897 1572437 ops.go:34] apiserver oom_adj: -16
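The oom_adj: -16 reading comes from the /proc probe at 23:08:18 and confirms the apiserver is deprioritized for OOM kills, so the control plane survives memory pressure longest. The same probe by hand (a sketch, same profile):

    minikube ssh -p ingress-addon-legacy-789037 -- \
        'cat /proc/$(pgrep kube-apiserver)/oom_adj'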
	I1009 23:08:18.502994 1572437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:08:18.600231 1572437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:08:19.196295 1572437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:08:19.696721 1572437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:08:20.195834 1572437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:08:20.696596 1572437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:08:21.195821 1572437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:08:21.695837 1572437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:08:22.196031 1572437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:08:22.696194 1572437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:08:23.196470 1572437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:08:23.696398 1572437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:08:24.195730 1572437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:08:24.696637 1572437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:08:25.195860 1572437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:08:25.696523 1572437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:08:26.196647 1572437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:08:26.696378 1572437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:08:27.196297 1572437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:08:27.696698 1572437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:08:28.196435 1572437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:08:28.696535 1572437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:08:29.196592 1572437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:08:29.695981 1572437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:08:30.196703 1572437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:08:30.696072 1572437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:08:31.195811 1572437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:08:31.696741 1572437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:08:32.196225 1572437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:08:32.696392 1572437 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:08:32.816759 1572437 kubeadm.go:1081] duration metric: took 14.484691169s to wait for elevateKubeSystemPrivileges.
	I1009 23:08:32.816786 1572437 kubeadm.go:406] StartCluster complete in 36.907867216s
	I1009 23:08:32.816806 1572437 settings.go:142] acquiring lock: {Name:mkeeac28244e9503bae3d91ba3a5c4a3392545f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 23:08:32.816863 1572437 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17375-1537865/kubeconfig
	I1009 23:08:32.817614 1572437 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17375-1537865/kubeconfig: {Name:mk913f33f2148d9a5b250c16fc9df0a8782f9275 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 23:08:32.818340 1572437 kapi.go:59] client config for ingress-addon-legacy-789037: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/ingress-addon-legacy-789037/client.crt", KeyFile:"/home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/ingress-addon-legacy-789037/client.key", CAFile:"/home/jenkins/minikube-integration/17375-1537865/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16b67c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 23:08:32.819510 1572437 config.go:182] Loaded profile config "ingress-addon-legacy-789037": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1009 23:08:32.819569 1572437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1009 23:08:32.819749 1572437 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1009 23:08:32.819856 1572437 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-789037"
	I1009 23:08:32.819872 1572437 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-789037"
	I1009 23:08:32.819986 1572437 host.go:66] Checking if "ingress-addon-legacy-789037" exists ...
	I1009 23:08:32.820550 1572437 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-789037 --format={{.State.Status}}
	I1009 23:08:32.821394 1572437 cert_rotation.go:137] Starting client certificate rotation controller
	I1009 23:08:32.821979 1572437 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-789037"
	I1009 23:08:32.822026 1572437 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-789037"
	I1009 23:08:32.822397 1572437 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-789037 --format={{.State.Status}}
	I1009 23:08:32.871381 1572437 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 23:08:32.875839 1572437 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 23:08:32.875861 1572437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 23:08:32.875913 1572437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-789037
	I1009 23:08:32.875690 1572437 kapi.go:59] client config for ingress-addon-legacy-789037: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/ingress-addon-legacy-789037/client.crt", KeyFile:"/home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/ingress-addon-legacy-789037/client.key", CAFile:"/home/jenkins/minikube-integration/17375-1537865/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16b67c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 23:08:32.884656 1572437 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-789037"
	I1009 23:08:32.884726 1572437 host.go:66] Checking if "ingress-addon-legacy-789037" exists ...
	I1009 23:08:32.885220 1572437 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-789037 --format={{.State.Status}}
	I1009 23:08:32.931741 1572437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34374 SSHKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/machines/ingress-addon-legacy-789037/id_rsa Username:docker}
	I1009 23:08:32.958700 1572437 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 23:08:32.958723 1572437 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 23:08:32.958796 1572437 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-789037
	I1009 23:08:32.992173 1572437 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34374 SSHKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/machines/ingress-addon-legacy-789037/id_rsa Username:docker}
	I1009 23:08:32.998414 1572437 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-789037" context rescaled to 1 replicas
	I1009 23:08:32.998456 1572437 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 23:08:33.005071 1572437 out.go:177] * Verifying Kubernetes components...
	I1009 23:08:33.007598 1572437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 23:08:33.110544 1572437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 23:08:33.222444 1572437 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
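That sed pipeline splices a hosts block mapping host.minikube.internal to 192.168.49.1 into the CoreDNS Corefile and replaces the ConfigMap in place, which is what the "host record injected" line below confirms. The result can be checked once kubectl points at this profile:

    kubectl -n kube-system get configmap coredns -o yaml | grep -B1 -A3 'hosts {'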
	I1009 23:08:33.223230 1572437 kapi.go:59] client config for ingress-addon-legacy-789037: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/ingress-addon-legacy-789037/client.crt", KeyFile:"/home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/ingress-addon-legacy-789037/client.key", CAFile:"/home/jenkins/minikube-integration/17375-1537865/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16b67c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 23:08:33.223572 1572437 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-789037" to be "Ready" ...
	I1009 23:08:33.285864 1572437 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 23:08:33.990712 1572437 start.go:926] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1009 23:08:34.077193 1572437 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1009 23:08:34.079480 1572437 addons.go:502] enable addons completed in 1.259712317s: enabled=[storage-provisioner default-storageclass]
	I1009 23:08:35.242579 1572437 node_ready.go:58] node "ingress-addon-legacy-789037" has status "Ready":"False"
	I1009 23:08:37.742176 1572437 node_ready.go:58] node "ingress-addon-legacy-789037" has status "Ready":"False"
	I1009 23:08:40.241674 1572437 node_ready.go:58] node "ingress-addon-legacy-789037" has status "Ready":"False"
	I1009 23:08:41.240772 1572437 node_ready.go:49] node "ingress-addon-legacy-789037" has status "Ready":"True"
	I1009 23:08:41.240802 1572437 node_ready.go:38] duration metric: took 8.017189424s waiting for node "ingress-addon-legacy-789037" to be "Ready" ...
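node_ready.go polls the node object until its Ready condition reports True. A one-off equivalent of that poll, again assuming the kubeconfig context matches the profile name:

	# print just the Ready condition status (expect "True")
	kubectl --context ingress-addon-legacy-789037 get node ingress-addon-legacy-789037 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	# or block with the same 6m budget the test uses
	kubectl --context ingress-addon-legacy-789037 wait --for=condition=Ready node/ingress-addon-legacy-789037 --timeout=6m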
	I1009 23:08:41.240812 1572437 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods, including pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler], to be "Ready" ...
	I1009 23:08:41.247853 1572437 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-9nvfw" in "kube-system" namespace to be "Ready" ...
	I1009 23:08:43.256182 1572437 pod_ready.go:102] pod "coredns-66bff467f8-9nvfw" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-10-09 23:08:32 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1009 23:08:45.294084 1572437 pod_ready.go:102] pod "coredns-66bff467f8-9nvfw" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-10-09 23:08:32 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
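The Unschedulable messages above are the normal startup race rather than a failure: until the kindnet CNI comes up the node keeps its node.kubernetes.io/not-ready taint, so the scheduler reports 0/1 nodes available for CoreDNS. The taint clearing is what unblocks scheduling, and it can be watched directly:

	# expect empty output once the node is Ready
	kubectl --context ingress-addon-legacy-789037 get node ingress-addon-legacy-789037 -o jsonpath='{.spec.taints}'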
	I1009 23:08:47.759332 1572437 pod_ready.go:102] pod "coredns-66bff467f8-9nvfw" in "kube-system" namespace has status "Ready":"False"
	I1009 23:08:49.759656 1572437 pod_ready.go:102] pod "coredns-66bff467f8-9nvfw" in "kube-system" namespace has status "Ready":"False"
	I1009 23:08:52.258101 1572437 pod_ready.go:102] pod "coredns-66bff467f8-9nvfw" in "kube-system" namespace has status "Ready":"False"
	I1009 23:08:54.259066 1572437 pod_ready.go:102] pod "coredns-66bff467f8-9nvfw" in "kube-system" namespace has status "Ready":"False"
	I1009 23:08:56.758213 1572437 pod_ready.go:92] pod "coredns-66bff467f8-9nvfw" in "kube-system" namespace has status "Ready":"True"
	I1009 23:08:56.758239 1572437 pod_ready.go:81] duration metric: took 15.510354184s waiting for pod "coredns-66bff467f8-9nvfw" in "kube-system" namespace to be "Ready" ...
	I1009 23:08:56.758252 1572437 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-789037" in "kube-system" namespace to be "Ready" ...
	I1009 23:08:56.763348 1572437 pod_ready.go:92] pod "etcd-ingress-addon-legacy-789037" in "kube-system" namespace has status "Ready":"True"
	I1009 23:08:56.763373 1572437 pod_ready.go:81] duration metric: took 5.095499ms waiting for pod "etcd-ingress-addon-legacy-789037" in "kube-system" namespace to be "Ready" ...
	I1009 23:08:56.763389 1572437 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-789037" in "kube-system" namespace to be "Ready" ...
	I1009 23:08:56.768193 1572437 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-789037" in "kube-system" namespace has status "Ready":"True"
	I1009 23:08:56.768221 1572437 pod_ready.go:81] duration metric: took 4.824196ms waiting for pod "kube-apiserver-ingress-addon-legacy-789037" in "kube-system" namespace to be "Ready" ...
	I1009 23:08:56.768238 1572437 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-789037" in "kube-system" namespace to be "Ready" ...
	I1009 23:08:56.773411 1572437 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-789037" in "kube-system" namespace has status "Ready":"True"
	I1009 23:08:56.773437 1572437 pod_ready.go:81] duration metric: took 5.19026ms waiting for pod "kube-controller-manager-ingress-addon-legacy-789037" in "kube-system" namespace to be "Ready" ...
	I1009 23:08:56.773450 1572437 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nsqnq" in "kube-system" namespace to be "Ready" ...
	I1009 23:08:56.778132 1572437 pod_ready.go:92] pod "kube-proxy-nsqnq" in "kube-system" namespace has status "Ready":"True"
	I1009 23:08:56.778152 1572437 pod_ready.go:81] duration metric: took 4.695022ms waiting for pod "kube-proxy-nsqnq" in "kube-system" namespace to be "Ready" ...
	I1009 23:08:56.778162 1572437 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-789037" in "kube-system" namespace to be "Ready" ...
	I1009 23:08:56.953523 1572437 request.go:629] Waited for 175.294007ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-789037
	I1009 23:08:57.153604 1572437 request.go:629] Waited for 197.355435ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-789037
	I1009 23:08:57.156668 1572437 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-789037" in "kube-system" namespace has status "Ready":"True"
	I1009 23:08:57.156698 1572437 pod_ready.go:81] duration metric: took 378.526491ms waiting for pod "kube-scheduler-ingress-addon-legacy-789037" in "kube-system" namespace to be "Ready" ...
	I1009 23:08:57.156717 1572437 pod_ready.go:38] duration metric: took 15.915887621s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
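The same readiness gate can be reproduced with kubectl wait against each selector in that label list, e.g. for CoreDNS:

	kubectl --context ingress-addon-legacy-789037 -n kube-system wait \
	  --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m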
	I1009 23:08:57.156734 1572437 api_server.go:52] waiting for apiserver process to appear ...
	I1009 23:08:57.156794 1572437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 23:08:57.170446 1572437 api_server.go:72] duration metric: took 24.17195564s to wait for apiserver process to appear ...
	I1009 23:08:57.170470 1572437 api_server.go:88] waiting for apiserver healthz status ...
	I1009 23:08:57.170488 1572437 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1009 23:08:57.180047 1572437 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1009 23:08:57.180938 1572437 api_server.go:141] control plane version: v1.18.20
	I1009 23:08:57.180963 1572437 api_server.go:131] duration metric: took 10.486195ms to wait for apiserver health ...
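The healthz check is a plain HTTPS GET against the apiserver. Since /healthz is readable by unauthenticated users under the default system:public-info-viewer binding, the probe can be reproduced without client certificates (assuming default RBAC; -k skips CA verification):

	curl -sk https://192.168.49.2:8443/healthz
	# ok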
	I1009 23:08:57.180972 1572437 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 23:08:57.353355 1572437 request.go:629] Waited for 172.314379ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1009 23:08:57.359352 1572437 system_pods.go:59] 8 kube-system pods found
	I1009 23:08:57.359387 1572437 system_pods.go:61] "coredns-66bff467f8-9nvfw" [7ce1a4a9-9479-4e56-a9c8-1ee8811a2592] Running
	I1009 23:08:57.359394 1572437 system_pods.go:61] "etcd-ingress-addon-legacy-789037" [6ac5cf85-2e96-46ce-9407-2f2172e091e2] Running
	I1009 23:08:57.359400 1572437 system_pods.go:61] "kindnet-m4l94" [fdb574b2-bafb-48fe-9225-da27e44536cb] Running
	I1009 23:08:57.359426 1572437 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-789037" [5cd40ad9-8a40-4c8d-bb5c-83494d93bfb9] Running
	I1009 23:08:57.359435 1572437 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-789037" [47afc5e9-8281-459c-9207-4f2f88b20945] Running
	I1009 23:08:57.359440 1572437 system_pods.go:61] "kube-proxy-nsqnq" [205d08ce-412d-45c3-8249-e548abf862d8] Running
	I1009 23:08:57.359453 1572437 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-789037" [0f45eee1-334d-4426-bcdb-5866c213e7b2] Running
	I1009 23:08:57.359458 1572437 system_pods.go:61] "storage-provisioner" [606a8ff2-927c-4a8a-bc10-d6b280bdad03] Running
	I1009 23:08:57.359464 1572437 system_pods.go:74] duration metric: took 178.486862ms to wait for pod list to return data ...
	I1009 23:08:57.359476 1572437 default_sa.go:34] waiting for default service account to be created ...
	I1009 23:08:57.553866 1572437 request.go:629] Waited for 194.312578ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I1009 23:08:57.556270 1572437 default_sa.go:45] found service account: "default"
	I1009 23:08:57.556301 1572437 default_sa.go:55] duration metric: took 196.818564ms for default service account to be created ...
	I1009 23:08:57.556313 1572437 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 23:08:57.753696 1572437 request.go:629] Waited for 197.316206ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1009 23:08:57.759592 1572437 system_pods.go:86] 8 kube-system pods found
	I1009 23:08:57.759620 1572437 system_pods.go:89] "coredns-66bff467f8-9nvfw" [7ce1a4a9-9479-4e56-a9c8-1ee8811a2592] Running
	I1009 23:08:57.759627 1572437 system_pods.go:89] "etcd-ingress-addon-legacy-789037" [6ac5cf85-2e96-46ce-9407-2f2172e091e2] Running
	I1009 23:08:57.759633 1572437 system_pods.go:89] "kindnet-m4l94" [fdb574b2-bafb-48fe-9225-da27e44536cb] Running
	I1009 23:08:57.759643 1572437 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-789037" [5cd40ad9-8a40-4c8d-bb5c-83494d93bfb9] Running
	I1009 23:08:57.759649 1572437 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-789037" [47afc5e9-8281-459c-9207-4f2f88b20945] Running
	I1009 23:08:57.759654 1572437 system_pods.go:89] "kube-proxy-nsqnq" [205d08ce-412d-45c3-8249-e548abf862d8] Running
	I1009 23:08:57.759664 1572437 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-789037" [0f45eee1-334d-4426-bcdb-5866c213e7b2] Running
	I1009 23:08:57.759669 1572437 system_pods.go:89] "storage-provisioner" [606a8ff2-927c-4a8a-bc10-d6b280bdad03] Running
	I1009 23:08:57.759678 1572437 system_pods.go:126] duration metric: took 203.360583ms to wait for k8s-apps to be running ...
	I1009 23:08:57.759688 1572437 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 23:08:57.759751 1572437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 23:08:57.773650 1572437 system_svc.go:56] duration metric: took 13.950157ms WaitForService to wait for kubelet.
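The kubelet check is nothing more than systemd's is-active run over SSH; by hand that is:

	minikube -p ingress-addon-legacy-789037 ssh "sudo systemctl is-active kubelet"
	# active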
	I1009 23:08:57.773679 1572437 kubeadm.go:581] duration metric: took 24.775196242s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1009 23:08:57.773698 1572437 node_conditions.go:102] verifying NodePressure condition ...
	I1009 23:08:57.954017 1572437 request.go:629] Waited for 180.230975ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I1009 23:08:57.956849 1572437 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1009 23:08:57.956883 1572437 node_conditions.go:123] node cpu capacity is 2
	I1009 23:08:57.956895 1572437 node_conditions.go:105] duration metric: took 183.172867ms to run NodePressure ...
	I1009 23:08:57.956907 1572437 start.go:228] waiting for startup goroutines ...
	I1009 23:08:57.956945 1572437 start.go:233] waiting for cluster config update ...
	I1009 23:08:57.956962 1572437 start.go:242] writing updated cluster config ...
	I1009 23:08:57.957310 1572437 ssh_runner.go:195] Run: rm -f paused
	I1009 23:08:58.034746 1572437 start.go:600] kubectl: 1.28.2, cluster: 1.18.20 (minor skew: 10)
	I1009 23:08:58.037797 1572437 out.go:177] 
	W1009 23:08:58.040018 1572437 out.go:239] ! /usr/local/bin/kubectl is version 1.28.2, which may have incompatibilities with Kubernetes 1.18.20.
	I1009 23:08:58.042162 1572437 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I1009 23:08:58.044271 1572437 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-789037" cluster and "default" namespace by default
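The skew warning is advisory: kubectl v1.28.2 against a v1.18.20 apiserver is far outside the supported ±1 minor-version window, so some subcommands may misbehave. The hint above sidesteps this by letting minikube download and proxy a version-matched kubectl:

	minikube -p ingress-addon-legacy-789037 kubectl -- get pods -A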
	
	* 
	* ==> CRI-O <==
	* Oct 09 23:12:02 ingress-addon-legacy-789037 crio[906]: time="2023-10-09 23:12:02.031454925Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=3d485e10-3c57-4b56-860f-ecdea8fe4b50 name=/runtime.v1alpha2.ImageService/ImageStatus
	Oct 09 23:12:02 ingress-addon-legacy-789037 crio[906]: time="2023-10-09 23:12:02.031680665Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:97e050c3e21e9472ce8eb8fcb7bb8f23063c0b473fe44bdc42246bb01c15cdd4,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:9c2e643516106a9eab2bf382e5025f92ec1c15275a7f4315562271033b9356a6],Size_:28999827,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=3d485e10-3c57-4b56-860f-ecdea8fe4b50 name=/runtime.v1alpha2.ImageService/ImageStatus
	Oct 09 23:12:02 ingress-addon-legacy-789037 crio[906]: time="2023-10-09 23:12:02.032441620Z" level=info msg="Creating container: default/hello-world-app-5f5d8b66bb-7hn8j/hello-world-app" id=9df54f8b-0359-4b42-a1e1-859fc66dd6e8 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Oct 09 23:12:02 ingress-addon-legacy-789037 crio[906]: time="2023-10-09 23:12:02.032553603Z" level=warning msg="Allowed annotations are specified for workload []"
	Oct 09 23:12:02 ingress-addon-legacy-789037 crio[906]: time="2023-10-09 23:12:02.143740386Z" level=info msg="Created container 1407e0e4216db7f7cf98d680dd1d2f24c7c91006227060f8a51ab256c9d1304c: default/hello-world-app-5f5d8b66bb-7hn8j/hello-world-app" id=9df54f8b-0359-4b42-a1e1-859fc66dd6e8 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Oct 09 23:12:02 ingress-addon-legacy-789037 crio[906]: time="2023-10-09 23:12:02.144674329Z" level=info msg="Starting container: 1407e0e4216db7f7cf98d680dd1d2f24c7c91006227060f8a51ab256c9d1304c" id=08969d06-4df8-471b-bd66-b2c2cc9b23f5 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Oct 09 23:12:02 ingress-addon-legacy-789037 conmon[3726]: conmon 1407e0e4216db7f7cf98 <ninfo>: container 3738 exited with status 1
	Oct 09 23:12:02 ingress-addon-legacy-789037 crio[906]: time="2023-10-09 23:12:02.160409591Z" level=info msg="Started container" PID=3738 containerID=1407e0e4216db7f7cf98d680dd1d2f24c7c91006227060f8a51ab256c9d1304c description=default/hello-world-app-5f5d8b66bb-7hn8j/hello-world-app id=08969d06-4df8-471b-bd66-b2c2cc9b23f5 name=/runtime.v1alpha2.RuntimeService/StartContainer sandboxID=02ca473693226f369c9b74b253efe52023eddec2a14458370c94a45b2972e86b
	Oct 09 23:12:02 ingress-addon-legacy-789037 crio[906]: time="2023-10-09 23:12:02.603903161Z" level=info msg="Removing container: 936fe200d49d99f6b3c41c16541e70535e1e465fc6fe8eb5c80bce4efa8516f5" id=85b19afe-b960-43cc-a01e-0490805cea77 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Oct 09 23:12:02 ingress-addon-legacy-789037 crio[906]: time="2023-10-09 23:12:02.652241555Z" level=info msg="Removed container 936fe200d49d99f6b3c41c16541e70535e1e465fc6fe8eb5c80bce4efa8516f5: default/hello-world-app-5f5d8b66bb-7hn8j/hello-world-app" id=85b19afe-b960-43cc-a01e-0490805cea77 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Oct 09 23:12:02 ingress-addon-legacy-789037 crio[906]: time="2023-10-09 23:12:02.858476991Z" level=warning msg="Stopping container 267b4cd526254c46684ebfa61ed218163752171325d481d8013668a40af49015 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=8e15e3f3-fa96-43a5-92e2-dd4eb01833d9 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Oct 09 23:12:02 ingress-addon-legacy-789037 conmon[2741]: conmon 267b4cd526254c46684e <ninfo>: container 2752 exited with status 137
	Oct 09 23:12:03 ingress-addon-legacy-789037 crio[906]: time="2023-10-09 23:12:03.063401376Z" level=info msg="Stopped container 267b4cd526254c46684ebfa61ed218163752171325d481d8013668a40af49015: ingress-nginx/ingress-nginx-controller-7fcf777cb7-w8s2n/controller" id=9129253e-ce36-40a3-9605-292e24b0d32c name=/runtime.v1alpha2.RuntimeService/StopContainer
	Oct 09 23:12:03 ingress-addon-legacy-789037 crio[906]: time="2023-10-09 23:12:03.064240132Z" level=info msg="Stopping pod sandbox: d2882aa4177ecb2c48eacdd6cd0f77e7510e8422740ad3a9b9d644ee97f1e4ff" id=8e61deb6-531a-4d07-ba89-87208c6b1462 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Oct 09 23:12:03 ingress-addon-legacy-789037 crio[906]: time="2023-10-09 23:12:03.064685909Z" level=info msg="Stopped container 267b4cd526254c46684ebfa61ed218163752171325d481d8013668a40af49015: ingress-nginx/ingress-nginx-controller-7fcf777cb7-w8s2n/controller" id=8e15e3f3-fa96-43a5-92e2-dd4eb01833d9 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Oct 09 23:12:03 ingress-addon-legacy-789037 crio[906]: time="2023-10-09 23:12:03.065299466Z" level=info msg="Stopping pod sandbox: d2882aa4177ecb2c48eacdd6cd0f77e7510e8422740ad3a9b9d644ee97f1e4ff" id=a6a9b8b5-a06b-4f5e-9272-68377ea8776a name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Oct 09 23:12:03 ingress-addon-legacy-789037 crio[906]: time="2023-10-09 23:12:03.068650624Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-FYV7FVXWEZHKONSV - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-GH3OGL6HTEEO4CEX - [0:0]\n-X KUBE-HP-GH3OGL6HTEEO4CEX\n-X KUBE-HP-FYV7FVXWEZHKONSV\nCOMMIT\n"
	Oct 09 23:12:03 ingress-addon-legacy-789037 crio[906]: time="2023-10-09 23:12:03.071091468Z" level=info msg="Closing host port tcp:80"
	Oct 09 23:12:03 ingress-addon-legacy-789037 crio[906]: time="2023-10-09 23:12:03.071222398Z" level=info msg="Closing host port tcp:443"
	Oct 09 23:12:03 ingress-addon-legacy-789037 crio[906]: time="2023-10-09 23:12:03.072726074Z" level=info msg="Host port tcp:80 does not have an open socket"
	Oct 09 23:12:03 ingress-addon-legacy-789037 crio[906]: time="2023-10-09 23:12:03.072763859Z" level=info msg="Host port tcp:443 does not have an open socket"
	Oct 09 23:12:03 ingress-addon-legacy-789037 crio[906]: time="2023-10-09 23:12:03.072940615Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7fcf777cb7-w8s2n Namespace:ingress-nginx ID:d2882aa4177ecb2c48eacdd6cd0f77e7510e8422740ad3a9b9d644ee97f1e4ff UID:475013f1-6771-4f67-ac73-504222e4fac5 NetNS:/var/run/netns/5ffd1f4c-cf32-4eb9-8174-72873c63c57d Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 09 23:12:03 ingress-addon-legacy-789037 crio[906]: time="2023-10-09 23:12:03.073106769Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7fcf777cb7-w8s2n from CNI network \"kindnet\" (type=ptp)"
	Oct 09 23:12:03 ingress-addon-legacy-789037 crio[906]: time="2023-10-09 23:12:03.109074358Z" level=info msg="Stopped pod sandbox: d2882aa4177ecb2c48eacdd6cd0f77e7510e8422740ad3a9b9d644ee97f1e4ff" id=8e61deb6-531a-4d07-ba89-87208c6b1462 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Oct 09 23:12:03 ingress-addon-legacy-789037 crio[906]: time="2023-10-09 23:12:03.109236902Z" level=info msg="Stopped pod sandbox (already stopped): d2882aa4177ecb2c48eacdd6cd0f77e7510e8422740ad3a9b9d644ee97f1e4ff" id=a6a9b8b5-a06b-4f5e-9272-68377ea8776a name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
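Everything in the CRI-O excerpt can be cross-checked from inside the node: crictl speaks to the same CRI socket, and the hostport teardown shown above (closing tcp:80/443 and deleting the KUBE-HP-* chains) lands in the nat table. Two probes, assuming the crictl and iptables binaries shipped in the minikube node image:

	minikube -p ingress-addon-legacy-789037 ssh "sudo crictl ps -a"
	minikube -p ingress-addon-legacy-789037 ssh "sudo iptables -t nat -S KUBE-HOSTPORTS"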
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1407e0e4216db       97e050c3e21e9472ce8eb8fcb7bb8f23063c0b473fe44bdc42246bb01c15cdd4                                                   6 seconds ago       Exited              hello-world-app           2                   02ca473693226       hello-world-app-5f5d8b66bb-7hn8j
	1ee715c3db67f       docker.io/library/nginx@sha256:4c93a3bd8bf95412889dd84213570102176b6052d88bb828eaf449c56aca55ef                    2 minutes ago       Running             nginx                     0                   12cf7c6d79e06       nginx
	267b4cd526254       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   2 minutes ago       Exited              controller                0                   d2882aa4177ec       ingress-nginx-controller-7fcf777cb7-w8s2n
	da1ec32ab5bc0       docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7     3 minutes ago       Exited              patch                     0                   067908729d4ee       ingress-nginx-admission-patch-k9fnz
	76862e9c272e4       docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7     3 minutes ago       Exited              create                    0                   990fac483076f       ingress-nginx-admission-create-7wxwd
	7fd811fc78f38       gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2    3 minutes ago       Running             storage-provisioner       0                   bcf6d4c056024       storage-provisioner
	75f59a4154fb3       6e17ba78cf3ebe1410fe828dc4ca57d3df37ad0b3c1a64161e5c27d57a24d184                                                   3 minutes ago       Running             coredns                   0                   35dda82582092       coredns-66bff467f8-9nvfw
	e4dcbd0c5c142       docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052                 3 minutes ago       Running             kindnet-cni               0                   ecde108d6be3a       kindnet-m4l94
	06107dc5b93be       565297bc6f7d41fdb7a8ac7f9d75617ef4e6efdd1b1e41af6e060e19c44c28a8                                                   3 minutes ago       Running             kube-proxy                0                   976d72a05df56       kube-proxy-nsqnq
	0f748a0677244       2694cf044d66591c37b12c60ce1f1cdba3d271af5ebda43a2e4d32ebbadd97d0                                                   4 minutes ago       Running             kube-apiserver            0                   1604ca6fdfde2       kube-apiserver-ingress-addon-legacy-789037
	835422e9705c9       68a4fac29a865f21217550dbd3570dc1adbc602cf05d6eeb6f060eec1359e1f1                                                   4 minutes ago       Running             kube-controller-manager   0                   cc69673acc7e6       kube-controller-manager-ingress-addon-legacy-789037
	ec3083888992b       095f37015706de6eedb4f57eb2f9a25a1e3bf4bec63d50ba73f8968ef4094fd1                                                   4 minutes ago       Running             kube-scheduler            0                   78356870223a7       kube-scheduler-ingress-addon-legacy-789037
	113e171cd7870       ab707b0a0ea339254cc6e3f2e7d618d4793d5129acb2288e9194769271404952                                                   4 minutes ago       Running             etcd                      0                   c56d07f059cb4       etcd-ingress-addon-legacy-789037
	
	* 
	* ==> coredns [75f59a4154fb3215562d0a89a5fea82aad5f67a81d3fd733c9f1a6e13e027c5c] <==
	* [INFO] 10.244.0.5:41160 - 59684 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000057248s
	[INFO] 10.244.0.5:41160 - 18346 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000044521s
	[INFO] 10.244.0.5:58787 - 8570 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00184561s
	[INFO] 10.244.0.5:41160 - 64265 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000881135s
	[INFO] 10.244.0.5:58787 - 19581 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000122446s
	[INFO] 10.244.0.5:41160 - 14125 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001010793s
	[INFO] 10.244.0.5:41160 - 42551 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000059807s
	[INFO] 10.244.0.5:43725 - 21271 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000081805s
	[INFO] 10.244.0.5:34268 - 29843 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000061185s
	[INFO] 10.244.0.5:34268 - 37004 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000054367s
	[INFO] 10.244.0.5:43725 - 22637 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000033846s
	[INFO] 10.244.0.5:34268 - 34203 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000037809s
	[INFO] 10.244.0.5:43725 - 23458 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000032197s
	[INFO] 10.244.0.5:34268 - 53431 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000033264s
	[INFO] 10.244.0.5:43725 - 52480 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000031688s
	[INFO] 10.244.0.5:34268 - 53166 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000056591s
	[INFO] 10.244.0.5:43725 - 27781 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000040525s
	[INFO] 10.244.0.5:34268 - 36399 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000027504s
	[INFO] 10.244.0.5:43725 - 9374 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000022293s
	[INFO] 10.244.0.5:34268 - 57646 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000956803s
	[INFO] 10.244.0.5:43725 - 58121 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001165091s
	[INFO] 10.244.0.5:43725 - 48430 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001048348s
	[INFO] 10.244.0.5:34268 - 22424 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001142174s
	[INFO] 10.244.0.5:43725 - 44711 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000036972s
	[INFO] 10.244.0.5:34268 - 33015 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000034864s
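The NXDOMAIN bursts above are not errors; they are the pod resolver walking its search path (ndots:5 by default with cluster-first DNS), appending each search suffix to hello-world-app.default.svc.cluster.local before the bare FQDN finally answers NOERROR. A trailing dot makes the name absolute and skips the expansion entirely, e.g. from a throwaway pod (dns-probe is a hypothetical name):

	kubectl --context ingress-addon-legacy-789037 run dns-probe --rm -it --restart=Never \
	  --image=busybox -- nslookup hello-world-app.default.svc.cluster.local.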
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-789037
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-789037
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90
	                    minikube.k8s.io/name=ingress-addon-legacy-789037
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_09T23_08_18_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 09 Oct 2023 23:08:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-789037
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 09 Oct 2023 23:12:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 09 Oct 2023 23:11:51 +0000   Mon, 09 Oct 2023 23:08:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 09 Oct 2023 23:11:51 +0000   Mon, 09 Oct 2023 23:08:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 09 Oct 2023 23:11:51 +0000   Mon, 09 Oct 2023 23:08:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 09 Oct 2023 23:11:51 +0000   Mon, 09 Oct 2023 23:08:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-789037
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 bf7f6b6b378241b39c570b9bb3adac1d
	  System UUID:                45726716-5d0e-4f30-a269-c306c124131c
	  Boot ID:                    049a78d9-9f92-4a07-bf20-80a1aba53693
	  Kernel Version:             5.15.0-1047-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-7hn8j                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m45s
	  kube-system                 coredns-66bff467f8-9nvfw                               100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     3m36s
	  kube-system                 etcd-ingress-addon-legacy-789037                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m47s
	  kube-system                 kindnet-m4l94                                          100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      3m36s
	  kube-system                 kube-apiserver-ingress-addon-legacy-789037             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m47s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-789037    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m47s
	  kube-system                 kube-proxy-nsqnq                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m36s
	  kube-system                 kube-scheduler-ingress-addon-legacy-789037             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m47s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             120Mi (1%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  NodeHasSufficientMemory  4m1s (x5 over 4m2s)  kubelet     Node ingress-addon-legacy-789037 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m1s (x5 over 4m2s)  kubelet     Node ingress-addon-legacy-789037 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m1s (x4 over 4m2s)  kubelet     Node ingress-addon-legacy-789037 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m48s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m47s                kubelet     Node ingress-addon-legacy-789037 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m47s                kubelet     Node ingress-addon-legacy-789037 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m47s                kubelet     Node ingress-addon-legacy-789037 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m34s                kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                3m27s                kubelet     Node ingress-addon-legacy-789037 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.001188] FS-Cache: O-key=[8] 'ed75ed0000000000'
	[  +0.000841] FS-Cache: N-cookie c=00000066 [p=0000005d fl=2 nc=0 na=1]
	[  +0.000972] FS-Cache: N-cookie d=00000000cbdaf303{9p.inode} n=00000000f000a426
	[  +0.001097] FS-Cache: N-key=[8] 'ed75ed0000000000'
	[  +0.002696] FS-Cache: Duplicate cookie detected
	[  +0.000758] FS-Cache: O-cookie c=00000060 [p=0000005d fl=226 nc=0 na=1]
	[  +0.000976] FS-Cache: O-cookie d=00000000cbdaf303{9p.inode} n=00000000e2aea24f
	[  +0.001120] FS-Cache: O-key=[8] 'ed75ed0000000000'
	[  +0.000757] FS-Cache: N-cookie c=00000067 [p=0000005d fl=2 nc=0 na=1]
	[  +0.001016] FS-Cache: N-cookie d=00000000cbdaf303{9p.inode} n=000000000a2d4cb7
	[  +0.001098] FS-Cache: N-key=[8] 'ed75ed0000000000'
	[  +2.824684] FS-Cache: Duplicate cookie detected
	[  +0.000733] FS-Cache: O-cookie c=0000005e [p=0000005d fl=226 nc=0 na=1]
	[  +0.001053] FS-Cache: O-cookie d=00000000cbdaf303{9p.inode} n=0000000017dc1af4
	[  +0.001081] FS-Cache: O-key=[8] 'ec75ed0000000000'
	[  +0.000718] FS-Cache: N-cookie c=00000069 [p=0000005d fl=2 nc=0 na=1]
	[  +0.000968] FS-Cache: N-cookie d=00000000cbdaf303{9p.inode} n=00000000f000a426
	[  +0.001070] FS-Cache: N-key=[8] 'ec75ed0000000000'
	[  +0.325722] FS-Cache: Duplicate cookie detected
	[  +0.000761] FS-Cache: O-cookie c=00000063 [p=0000005d fl=226 nc=0 na=1]
	[  +0.000956] FS-Cache: O-cookie d=00000000cbdaf303{9p.inode} n=00000000142f07e9
	[  +0.001082] FS-Cache: O-key=[8] 'f375ed0000000000'
	[  +0.000768] FS-Cache: N-cookie c=0000006a [p=0000005d fl=2 nc=0 na=1]
	[  +0.000938] FS-Cache: N-cookie d=00000000cbdaf303{9p.inode} n=000000009f4943e1
	[  +0.001027] FS-Cache: N-key=[8] 'f375ed0000000000'
	
	* 
	* ==> etcd [113e171cd7870fce83d252b6dc8f90da667cca0e35f3235bc1e3288a27c0c575] <==
	* raft2023/10/09 23:08:08 INFO: aec36adc501070cc became follower at term 0
	raft2023/10/09 23:08:08 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/10/09 23:08:08 INFO: aec36adc501070cc became follower at term 1
	raft2023/10/09 23:08:08 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-10-09 23:08:08.250527 W | auth: simple token is not cryptographically signed
	2023-10-09 23:08:08.254998 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	raft2023/10/09 23:08:08 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-10-09 23:08:08.257880 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2023-10-09 23:08:08.257943 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-10-09 23:08:08.258017 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-10-09 23:08:08.258102 I | embed: listening for peers on 192.168.49.2:2380
	2023-10-09 23:08:08.258239 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2023/10/09 23:08:08 INFO: aec36adc501070cc is starting a new election at term 1
	raft2023/10/09 23:08:08 INFO: aec36adc501070cc became candidate at term 2
	raft2023/10/09 23:08:08 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2023/10/09 23:08:08 INFO: aec36adc501070cc became leader at term 2
	raft2023/10/09 23:08:08 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2023-10-09 23:08:09.059154 I | embed: ready to serve client requests
	2023-10-09 23:08:09.157267 I | embed: ready to serve client requests
	2023-10-09 23:08:09.187179 I | etcdserver: published {Name:ingress-addon-legacy-789037 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2023-10-09 23:08:09.400675 I | etcdserver: setting up the initial cluster version to 3.4
	2023-10-09 23:08:09.443139 I | embed: serving client requests on 127.0.0.1:2379
	2023-10-09 23:08:09.455080 I | embed: serving client requests on 192.168.49.2:2379
	2023-10-09 23:08:09.663940 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-10-09 23:08:09.664108 I | etcdserver/api: enabled capabilities for version 3.4
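The raft trace shows the expected single-member bootstrap: the lone voter aec36adc501070cc times out at term 1, votes for itself, and elects itself leader at term 2. Member health can be confirmed from inside the etcd pod, reusing the certificate paths logged above (a sketch, assuming etcdctl's v3 defaults in the etcd 3.4.3 image):

	kubectl --context ingress-addon-legacy-789037 -n kube-system exec etcd-ingress-addon-legacy-789037 -- \
	  etcdctl --endpoints=https://127.0.0.1:2379 \
	    --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	    --cert=/var/lib/minikube/certs/etcd/server.crt \
	    --key=/var/lib/minikube/certs/etcd/server.key endpoint health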
	
	* 
	* ==> kernel <==
	*  23:12:08 up  6:54,  0 users,  load average: 0.11, 0.87, 1.45
	Linux ingress-addon-legacy-789037 5.15.0-1047-aws #52~20.04.1-Ubuntu SMP Thu Sep 21 10:08:54 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [e4dcbd0c5c1429a9d175b0a851de5fa28ead1b40c03dee43cac1f9469e71ec13] <==
	* I1009 23:10:05.795439       1 main.go:227] handling current node
	I1009 23:10:15.804397       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1009 23:10:15.804619       1 main.go:227] handling current node
	I1009 23:10:25.810855       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1009 23:10:25.810884       1 main.go:227] handling current node
	I1009 23:10:35.814817       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1009 23:10:35.814849       1 main.go:227] handling current node
	I1009 23:10:45.823249       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1009 23:10:45.823279       1 main.go:227] handling current node
	I1009 23:10:55.826529       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1009 23:10:55.826559       1 main.go:227] handling current node
	I1009 23:11:05.831398       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1009 23:11:05.831431       1 main.go:227] handling current node
	I1009 23:11:15.835037       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1009 23:11:15.835068       1 main.go:227] handling current node
	I1009 23:11:25.843814       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1009 23:11:25.843844       1 main.go:227] handling current node
	I1009 23:11:35.848268       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1009 23:11:35.848297       1 main.go:227] handling current node
	I1009 23:11:45.861427       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1009 23:11:45.861458       1 main.go:227] handling current node
	I1009 23:11:55.870567       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1009 23:11:55.870599       1 main.go:227] handling current node
	I1009 23:12:05.881967       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1009 23:12:05.881997       1 main.go:227] handling current node
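kindnet relists nodes roughly every ten seconds; on this single-node cluster each pass only reconfirms the current node, which is all the loop above shows. The pod routes it and the ptp CNI plugin (per the CRI-O teardown log) maintain for the 10.244.0.0/24 PodCIDR can be inspected on the node; exact output will vary:

	minikube -p ingress-addon-legacy-789037 ssh "ip route"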
	
	* 
	* ==> kube-apiserver [0f748a0677244cc757e2c37b2978a8ff4463335bfa399590de4ef8a5c9b866a8] <==
	* I1009 23:08:14.623872       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I1009 23:08:14.624000       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1009 23:08:14.624078       1 cache.go:39] Caches are synced for autoregister controller
	I1009 23:08:14.632010       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1009 23:08:14.672113       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I1009 23:08:15.405325       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1009 23:08:15.405352       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1009 23:08:15.417387       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I1009 23:08:15.421419       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I1009 23:08:15.421439       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I1009 23:08:15.846548       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1009 23:08:15.885062       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1009 23:08:15.951767       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1009 23:08:15.952996       1 controller.go:609] quota admission added evaluator for: endpoints
	I1009 23:08:15.959387       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1009 23:08:16.865516       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I1009 23:08:17.691463       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I1009 23:08:17.780012       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I1009 23:08:21.054337       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I1009 23:08:32.314327       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I1009 23:08:32.360613       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I1009 23:08:58.990944       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I1009 23:09:23.054125       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	E1009 23:12:00.594687       1 authentication.go:53] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
	E1009 23:12:00.864641       1 authentication.go:53] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
	
	* 
	* ==> kube-controller-manager [835422e9705c9e9d0a8d12baa5dc13596faf3ae1f91c223d1a1cf81cb62e9db6] <==
	* reemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x400015ba70)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x40014f44d8)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	E1009 23:08:32.539347       1 daemon_controller.go:321] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kindnet", UID:"84eda2fc-0d27-4b85-b1d3-06c2c03a02c0", ResourceVersion:"224", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63832489698, loc:(*time.Location)(0x6307ca0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"kindnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"docker.io/kindest/kindnetd:v20230809-80a64d96\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x40016c3f00), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x40016c3f20)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x40016c3f40), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x40016c3f60), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x40016c3f80), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x40016c3fa0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"docker.io/kindest/kindnetd:v20230809-80a64d96", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x40016c3fc0)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x40017f0000)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x400106ceb0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x40014f46d8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x40004f12d0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x400015ba80)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x40014f4720)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	I1009 23:08:32.629489       1 shared_informer.go:230] Caches are synced for disruption 
	I1009 23:08:32.629512       1 disruption.go:339] Sending events to api server.
	I1009 23:08:32.681507       1 shared_informer.go:230] Caches are synced for bootstrap_signer 
	I1009 23:08:32.706044       1 shared_informer.go:230] Caches are synced for attach detach 
	I1009 23:08:32.823419       1 shared_informer.go:230] Caches are synced for HPA 
	I1009 23:08:32.872856       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1009 23:08:32.872880       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1009 23:08:32.897718       1 shared_informer.go:230] Caches are synced for resource quota 
	I1009 23:08:32.930260       1 shared_informer.go:230] Caches are synced for resource quota 
	I1009 23:08:32.995355       1 shared_informer.go:223] Waiting for caches to sync for garbage collector
	I1009 23:08:32.995444       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1009 23:08:33.027531       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"017d1d7c-7f74-49b3-96bd-e85c0d7b80fb", APIVersion:"apps/v1", ResourceVersion:"371", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I1009 23:08:33.200652       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"77c352c1-b187-4723-9b69-a5f96ee23ac2", APIVersion:"apps/v1", ResourceVersion:"372", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-smjhv
	I1009 23:08:42.339756       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I1009 23:08:58.974161       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"3164ef7f-a04c-4ef0-adbf-29ba1b6606a7", APIVersion:"apps/v1", ResourceVersion:"484", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I1009 23:08:59.017900       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"0e605b5b-b207-4617-848a-9e8f2dc397f3", APIVersion:"apps/v1", ResourceVersion:"485", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-w8s2n
	I1009 23:08:59.045287       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"51b3f5ec-2d14-4f88-96f4-6eac1a4da853", APIVersion:"batch/v1", ResourceVersion:"488", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-7wxwd
	I1009 23:08:59.135989       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"2ce4b8be-1e56-4a41-9709-3a10b9d863b3", APIVersion:"batch/v1", ResourceVersion:"501", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-k9fnz
	I1009 23:09:03.204477       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"2ce4b8be-1e56-4a41-9709-3a10b9d863b3", APIVersion:"batch/v1", ResourceVersion:"510", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1009 23:09:03.228488       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"51b3f5ec-2d14-4f88-96f4-6eac1a4da853", APIVersion:"batch/v1", ResourceVersion:"499", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1009 23:11:42.603722       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"c3e9caee-ffcb-44e1-a52f-a7bf96380cf2", APIVersion:"apps/v1", ResourceVersion:"722", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I1009 23:11:42.623562       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"2848e5b9-750c-4271-b11e-1d4d0493f6b0", APIVersion:"apps/v1", ResourceVersion:"723", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-7hn8j
	
	* 
	* ==> kube-proxy [06107dc5b93bed0b2ec4366eced42b835bbf4bf12067b12f344a14f08c392045] <==
	* W1009 23:08:33.969442       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I1009 23:08:34.081744       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I1009 23:08:34.081860       1 server_others.go:186] Using iptables Proxier.
	I1009 23:08:34.092013       1 server.go:583] Version: v1.18.20
	I1009 23:08:34.100485       1 config.go:133] Starting endpoints config controller
	I1009 23:08:34.115770       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I1009 23:08:34.131543       1 config.go:315] Starting service config controller
	I1009 23:08:34.131640       1 shared_informer.go:223] Waiting for caches to sync for service config
	I1009 23:08:34.131670       1 shared_informer.go:230] Caches are synced for service config 
	I1009 23:08:34.224866       1 shared_informer.go:230] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [ec3083888992b6c2a3cc6655f13fe4559165ce44a2962a005fed887bde39ed5f] <==
	* I1009 23:08:14.619975       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1009 23:08:14.623795       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E1009 23:08:14.629114       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1009 23:08:14.629321       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1009 23:08:14.629439       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1009 23:08:14.629550       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1009 23:08:14.629649       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1009 23:08:14.629750       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1009 23:08:14.636956       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1009 23:08:14.637294       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1009 23:08:14.637423       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1009 23:08:14.637547       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1009 23:08:14.637673       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1009 23:08:14.637785       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1009 23:08:15.486204       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1009 23:08:15.510721       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1009 23:08:15.526186       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1009 23:08:15.566741       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1009 23:08:15.669526       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1009 23:08:15.679644       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1009 23:08:15.699331       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I1009 23:08:16.127047       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E1009 23:08:32.487546       1 factory.go:503] pod kube-system/coredns-66bff467f8-smjhv is already present in the backoff queue
	E1009 23:08:32.508063       1 factory.go:503] pod: kube-system/coredns-66bff467f8-9nvfw is already present in the active queue
	E1009 23:08:34.060275       1 factory.go:503] pod: kube-system/storage-provisioner is already present in the active queue
	
	* 
	* ==> kubelet <==
	* Oct 09 23:11:47 ingress-addon-legacy-789037 kubelet[1648]: I1009 23:11:47.577041    1648 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: fddb792549fd58f58779d83e67d572a0554391dbbd10b6d791500ee715b45bf3
	Oct 09 23:11:47 ingress-addon-legacy-789037 kubelet[1648]: I1009 23:11:47.577145    1648 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 936fe200d49d99f6b3c41c16541e70535e1e465fc6fe8eb5c80bce4efa8516f5
	Oct 09 23:11:47 ingress-addon-legacy-789037 kubelet[1648]: E1009 23:11:47.577420    1648 pod_workers.go:191] Error syncing pod 3d53db3b-0a67-4492-9ae0-2ea9505ec0f9 ("hello-world-app-5f5d8b66bb-7hn8j_default(3d53db3b-0a67-4492-9ae0-2ea9505ec0f9)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-7hn8j_default(3d53db3b-0a67-4492-9ae0-2ea9505ec0f9)"
	Oct 09 23:11:48 ingress-addon-legacy-789037 kubelet[1648]: I1009 23:11:48.579666    1648 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 936fe200d49d99f6b3c41c16541e70535e1e465fc6fe8eb5c80bce4efa8516f5
	Oct 09 23:11:48 ingress-addon-legacy-789037 kubelet[1648]: E1009 23:11:48.579917    1648 pod_workers.go:191] Error syncing pod 3d53db3b-0a67-4492-9ae0-2ea9505ec0f9 ("hello-world-app-5f5d8b66bb-7hn8j_default(3d53db3b-0a67-4492-9ae0-2ea9505ec0f9)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-7hn8j_default(3d53db3b-0a67-4492-9ae0-2ea9505ec0f9)"
	Oct 09 23:11:49 ingress-addon-legacy-789037 kubelet[1648]: E1009 23:11:49.030669    1648 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Oct 09 23:11:49 ingress-addon-legacy-789037 kubelet[1648]: E1009 23:11:49.030711    1648 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Oct 09 23:11:49 ingress-addon-legacy-789037 kubelet[1648]: E1009 23:11:49.030755    1648 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Oct 09 23:11:49 ingress-addon-legacy-789037 kubelet[1648]: E1009 23:11:49.030797    1648 pod_workers.go:191] Error syncing pod d31f480c-6952-4b6f-b16b-727dc690e631 ("kube-ingress-dns-minikube_kube-system(d31f480c-6952-4b6f-b16b-727dc690e631)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Oct 09 23:11:58 ingress-addon-legacy-789037 kubelet[1648]: I1009 23:11:58.533488    1648 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-59qvv" (UniqueName: "kubernetes.io/secret/d31f480c-6952-4b6f-b16b-727dc690e631-minikube-ingress-dns-token-59qvv") pod "d31f480c-6952-4b6f-b16b-727dc690e631" (UID: "d31f480c-6952-4b6f-b16b-727dc690e631")
	Oct 09 23:11:58 ingress-addon-legacy-789037 kubelet[1648]: I1009 23:11:58.538227    1648 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d31f480c-6952-4b6f-b16b-727dc690e631-minikube-ingress-dns-token-59qvv" (OuterVolumeSpecName: "minikube-ingress-dns-token-59qvv") pod "d31f480c-6952-4b6f-b16b-727dc690e631" (UID: "d31f480c-6952-4b6f-b16b-727dc690e631"). InnerVolumeSpecName "minikube-ingress-dns-token-59qvv". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Oct 09 23:11:58 ingress-addon-legacy-789037 kubelet[1648]: I1009 23:11:58.633884    1648 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-59qvv" (UniqueName: "kubernetes.io/secret/d31f480c-6952-4b6f-b16b-727dc690e631-minikube-ingress-dns-token-59qvv") on node "ingress-addon-legacy-789037" DevicePath ""
	Oct 09 23:12:00 ingress-addon-legacy-789037 kubelet[1648]: E1009 23:12:00.848390    1648 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-w8s2n.178c934d89b3c386", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-w8s2n", UID:"475013f1-6771-4f67-ac73-504222e4fac5", APIVersion:"v1", ResourceVersion:"495", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-789037"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc1414010324c4386, ext:223293697657, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc1414010324c4386, ext:223293697657, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-w8s2n.178c934d89b3c386" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Oct 09 23:12:00 ingress-addon-legacy-789037 kubelet[1648]: E1009 23:12:00.867075    1648 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-w8s2n.178c934d89b3c386", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-w8s2n", UID:"475013f1-6771-4f67-ac73-504222e4fac5", APIVersion:"v1", ResourceVersion:"495", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-789037"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc1414010324c4386, ext:223293697657, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc141401033478871, ext:223310164836, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-w8s2n.178c934d89b3c386" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Oct 09 23:12:02 ingress-addon-legacy-789037 kubelet[1648]: I1009 23:12:02.029712    1648 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 936fe200d49d99f6b3c41c16541e70535e1e465fc6fe8eb5c80bce4efa8516f5
	Oct 09 23:12:02 ingress-addon-legacy-789037 kubelet[1648]: I1009 23:12:02.602457    1648 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 936fe200d49d99f6b3c41c16541e70535e1e465fc6fe8eb5c80bce4efa8516f5
	Oct 09 23:12:02 ingress-addon-legacy-789037 kubelet[1648]: I1009 23:12:02.602688    1648 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 1407e0e4216db7f7cf98d680dd1d2f24c7c91006227060f8a51ab256c9d1304c
	Oct 09 23:12:02 ingress-addon-legacy-789037 kubelet[1648]: E1009 23:12:02.602938    1648 pod_workers.go:191] Error syncing pod 3d53db3b-0a67-4492-9ae0-2ea9505ec0f9 ("hello-world-app-5f5d8b66bb-7hn8j_default(3d53db3b-0a67-4492-9ae0-2ea9505ec0f9)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-7hn8j_default(3d53db3b-0a67-4492-9ae0-2ea9505ec0f9)"
	Oct 09 23:12:03 ingress-addon-legacy-789037 kubelet[1648]: W1009 23:12:03.614605    1648 pod_container_deletor.go:77] Container "d2882aa4177ecb2c48eacdd6cd0f77e7510e8422740ad3a9b9d644ee97f1e4ff" not found in pod's containers
	Oct 09 23:12:04 ingress-addon-legacy-789037 kubelet[1648]: I1009 23:12:04.981680    1648 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/475013f1-6771-4f67-ac73-504222e4fac5-webhook-cert") pod "475013f1-6771-4f67-ac73-504222e4fac5" (UID: "475013f1-6771-4f67-ac73-504222e4fac5")
	Oct 09 23:12:04 ingress-addon-legacy-789037 kubelet[1648]: I1009 23:12:04.981745    1648 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-zscv2" (UniqueName: "kubernetes.io/secret/475013f1-6771-4f67-ac73-504222e4fac5-ingress-nginx-token-zscv2") pod "475013f1-6771-4f67-ac73-504222e4fac5" (UID: "475013f1-6771-4f67-ac73-504222e4fac5")
	Oct 09 23:12:04 ingress-addon-legacy-789037 kubelet[1648]: I1009 23:12:04.987977    1648 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/475013f1-6771-4f67-ac73-504222e4fac5-ingress-nginx-token-zscv2" (OuterVolumeSpecName: "ingress-nginx-token-zscv2") pod "475013f1-6771-4f67-ac73-504222e4fac5" (UID: "475013f1-6771-4f67-ac73-504222e4fac5"). InnerVolumeSpecName "ingress-nginx-token-zscv2". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Oct 09 23:12:04 ingress-addon-legacy-789037 kubelet[1648]: I1009 23:12:04.989075    1648 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/475013f1-6771-4f67-ac73-504222e4fac5-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "475013f1-6771-4f67-ac73-504222e4fac5" (UID: "475013f1-6771-4f67-ac73-504222e4fac5"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Oct 09 23:12:05 ingress-addon-legacy-789037 kubelet[1648]: I1009 23:12:05.082099    1648 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/475013f1-6771-4f67-ac73-504222e4fac5-webhook-cert") on node "ingress-addon-legacy-789037" DevicePath ""
	Oct 09 23:12:05 ingress-addon-legacy-789037 kubelet[1648]: I1009 23:12:05.082145    1648 reconciler.go:319] Volume detached for volume "ingress-nginx-token-zscv2" (UniqueName: "kubernetes.io/secret/475013f1-6771-4f67-ac73-504222e4fac5-ingress-nginx-token-zscv2") on node "ingress-addon-legacy-789037" DevicePath ""
	
	* 
	* ==> storage-provisioner [7fd811fc78f384f58cf1e499ee5fe828567878d985868688eab9bd7eb3419653] <==
	* I1009 23:08:46.919396       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1009 23:08:46.932744       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1009 23:08:46.932857       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1009 23:08:46.940008       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1009 23:08:46.940710       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-789037_e3eb423f-3b02-43b6-9346-1f678128b792!
	I1009 23:08:46.941699       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a639e1c1-8c43-493a-b9d5-70cb15e8cec9", APIVersion:"v1", ResourceVersion:"432", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-789037_e3eb423f-3b02-43b6-9346-1f678128b792 became leader
	I1009 23:08:47.041146       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-789037_e3eb423f-3b02-43b6-9346-1f678128b792!
	

                                                
                                                
-- /stdout --
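The kube-controller-manager error in the logs above ("Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again") is the API server's optimistic-concurrency check firing on a stale resourceVersion; it is transient, not the test failure itself. The conventional remedy is a read-modify-write loop that refetches the object on every 409 Conflict. A minimal client-go sketch, assuming a configured clientset; the function name and annotation key are illustrative, not minikube code:

	package example

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/util/retry"
	)

	// touchKindnet re-reads the DaemonSet on every attempt so the update
	// carries the latest resourceVersion; RetryOnConflict retries only while
	// the API server keeps answering with the 409 Conflict quoted above.
	func touchKindnet(ctx context.Context, cs kubernetes.Interface) error {
		return retry.RetryOnConflict(retry.DefaultRetry, func() error {
			ds, err := cs.AppsV1().DaemonSets("kube-system").Get(ctx, "kindnet", metav1.GetOptions{})
			if err != nil {
				return err
			}
			if ds.Annotations == nil {
				ds.Annotations = map[string]string{}
			}
			ds.Annotations["example.invalid/touched"] = "true" // illustrative mutation
			_, err = cs.AppsV1().DaemonSets("kube-system").Update(ctx, ds, metav1.UpdateOptions{})
			return err
		})
	}
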
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-789037 -n ingress-addon-legacy-789037
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-789037 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (177.35s)
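
The ImageInspectError in the kubelet log above is the concrete blocker for the kube-ingress-dns pod: CRI-O will not guess a registry for the short name cryptexlabs/minikube-ingress-dns because the reference carries no registry host and the node's /etc/containers/registries.conf defines no unqualified-search registries. Either listing docker.io under unqualified-search-registries in that file or fully qualifying the image reference avoids the lookup. A small Go sketch of the qualification rule; the helper name is illustrative, not minikube code:

	package main

	import (
		"fmt"
		"strings"
	)

	// qualifyImage mirrors the short-name rule: CRI-O only consults
	// unqualified-search-registries when the part before the first "/" is not
	// a registry host, so emitting a fully qualified docker.io reference up
	// front sidesteps the error entirely.
	func qualifyImage(ref string) string {
		i := strings.Index(ref, "/")
		if i < 0 {
			return "docker.io/library/" + ref // bare official image, e.g. nginx:1.25
		}
		host := ref[:i]
		if strings.ContainsAny(host, ".:") || host == "localhost" {
			return ref // already carries a registry host
		}
		return "docker.io/" + ref
	}

	func main() {
		fmt.Println(qualifyImage("cryptexlabs/minikube-ingress-dns:0.3.0"))
		// Output: docker.io/cryptexlabs/minikube-ingress-dns:0.3.0
	}
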

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (4.17s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-717678 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-717678 -- exec busybox-5bc68d56bd-2rmqx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-717678 -- exec busybox-5bc68d56bd-2rmqx -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-arm64 kubectl -p multinode-717678 -- exec busybox-5bc68d56bd-2rmqx -- sh -c "ping -c 1 192.168.58.1": exit status 1 (258.025801ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-2rmqx): exit status 1
multinode_test.go:560: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-717678 -- exec busybox-5bc68d56bd-5q5k2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-717678 -- exec busybox-5bc68d56bd-5q5k2 -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-arm64 kubectl -p multinode-717678 -- exec busybox-5bc68d56bd-5q5k2 -- sh -c "ping -c 1 192.168.58.1": exit status 1 (259.26656ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-5q5k2): exit status 1
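
Both pods fail the same way: busybox's ping needs either a raw ICMP socket (CAP_NET_RAW, e.g. granted through the pod securityContext) or an unprivileged ICMP datagram socket, which the kernel permits only when the caller's group falls inside the net.ipv4.ping_group_range sysctl. A sketch of the unprivileged path using golang.org/x/net/icmp, with the target taken from the network gateway shown in the docker inspect output below:

	package main

	import (
		"fmt"
		"net"
		"os"

		"golang.org/x/net/icmp"
		"golang.org/x/net/ipv4"
	)

	func main() {
		// "udp4" asks for an unprivileged ICMP datagram socket; the kernel
		// grants it only when the caller's group is inside
		// net.ipv4.ping_group_range, the same check busybox ping fails above.
		c, err := icmp.ListenPacket("udp4", "0.0.0.0")
		if err != nil {
			fmt.Fprintln(os.Stderr, "unprivileged ICMP not permitted:", err)
			os.Exit(1)
		}
		defer c.Close()

		msg := icmp.Message{
			Type: ipv4.ICMPTypeEcho, Code: 0,
			Body: &icmp.Echo{ID: os.Getpid() & 0xffff, Seq: 1, Data: []byte("ping")},
		}
		wb, err := msg.Marshal(nil)
		if err != nil {
			fmt.Fprintln(os.Stderr, "marshal:", err)
			os.Exit(1)
		}
		// With a datagram ICMP socket the "port" in the UDPAddr is ignored.
		if _, err := c.WriteTo(wb, &net.UDPAddr{IP: net.ParseIP("192.168.58.1")}); err != nil {
			fmt.Fprintln(os.Stderr, "send:", err)
			os.Exit(1)
		}
		fmt.Println("echo request sent to 192.168.58.1")
	}

On a stock kernel ping_group_range defaults to "1 0" (no group), so this sketch hits the same EPERM unless the sysctl is widened, for example to "0 2147483647".
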
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-717678
helpers_test.go:235: (dbg) docker inspect multinode-717678:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4263e5d8fe6b4225f635cb6100a7248d26a60b28f7521d97b02e4d683d7c37c9",
	        "Created": "2023-10-09T23:18:46.10026197Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1609555,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-09T23:18:46.427634707Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:7c31788aee97084e64d3a410721295a10fc01c1f34b468c1bc9be09686708026",
	        "ResolvConfPath": "/var/lib/docker/containers/4263e5d8fe6b4225f635cb6100a7248d26a60b28f7521d97b02e4d683d7c37c9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4263e5d8fe6b4225f635cb6100a7248d26a60b28f7521d97b02e4d683d7c37c9/hostname",
	        "HostsPath": "/var/lib/docker/containers/4263e5d8fe6b4225f635cb6100a7248d26a60b28f7521d97b02e4d683d7c37c9/hosts",
	        "LogPath": "/var/lib/docker/containers/4263e5d8fe6b4225f635cb6100a7248d26a60b28f7521d97b02e4d683d7c37c9/4263e5d8fe6b4225f635cb6100a7248d26a60b28f7521d97b02e4d683d7c37c9-json.log",
	        "Name": "/multinode-717678",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-717678:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "multinode-717678",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e60218c7fffb2c9ed2a0bc8c73b84f7e7bf1d79430c8ea1424e87dae51897475-init/diff:/var/lib/docker/overlay2/ef9093ba51e6eb88ff4b48fff9bf153334448175aa68f58581a9571eed9ca4f9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e60218c7fffb2c9ed2a0bc8c73b84f7e7bf1d79430c8ea1424e87dae51897475/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e60218c7fffb2c9ed2a0bc8c73b84f7e7bf1d79430c8ea1424e87dae51897475/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e60218c7fffb2c9ed2a0bc8c73b84f7e7bf1d79430c8ea1424e87dae51897475/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-717678",
	                "Source": "/var/lib/docker/volumes/multinode-717678/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-717678",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-717678",
	                "name.minikube.sigs.k8s.io": "multinode-717678",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1e24c552d629a7bb2eee522d70917a7b39868680d5dbf22223ce755c025abaa1",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34434"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34433"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34430"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34432"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34431"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/1e24c552d629",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-717678": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "4263e5d8fe6b",
	                        "multinode-717678"
	                    ],
	                    "NetworkID": "7fa9be4abd6fece70bca4dfcabfe3d5a5058fcfbeed3e4c1ac63624386e60ae1",
	                    "EndpointID": "a431850318b1426113df3a89aed4fef030a86d1da0d40a419c66b847371dddd5",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
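The inspect output above is where the ping target comes from: 192.168.58.1 is the gateway of the multinode-717678 Docker network. A sketch of extracting it programmatically with a Go template over docker inspect; the helper name is illustrative, and the equivalent one-liner is shown in the comment:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// gatewayOf shells out to docker inspect with a Go template, equivalent to:
	//   docker inspect -f '{{(index .NetworkSettings.Networks "multinode-717678").Gateway}}' multinode-717678
	func gatewayOf(container, network string) (string, error) {
		tmpl := fmt.Sprintf(`{{(index .NetworkSettings.Networks %q).Gateway}}`, network)
		out, err := exec.Command("docker", "inspect", "-f", tmpl, container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		gw, err := gatewayOf("multinode-717678", "multinode-717678")
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("host gateway:", gw) // expected: 192.168.58.1
	}
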
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p multinode-717678 -n multinode-717678
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717678 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p multinode-717678 logs -n 25: (1.548436243s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -p mount-start-2-683585                           | mount-start-2-683585 | jenkins | v1.31.2 | 09 Oct 23 23:18 UTC | 09 Oct 23 23:18 UTC |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| ssh     | mount-start-2-683585 ssh -- ls                    | mount-start-2-683585 | jenkins | v1.31.2 | 09 Oct 23 23:18 UTC | 09 Oct 23 23:18 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-681714                           | mount-start-1-681714 | jenkins | v1.31.2 | 09 Oct 23 23:18 UTC | 09 Oct 23 23:18 UTC |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-683585 ssh -- ls                    | mount-start-2-683585 | jenkins | v1.31.2 | 09 Oct 23 23:18 UTC | 09 Oct 23 23:18 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-683585                           | mount-start-2-683585 | jenkins | v1.31.2 | 09 Oct 23 23:18 UTC | 09 Oct 23 23:18 UTC |
	| start   | -p mount-start-2-683585                           | mount-start-2-683585 | jenkins | v1.31.2 | 09 Oct 23 23:18 UTC | 09 Oct 23 23:18 UTC |
	| ssh     | mount-start-2-683585 ssh -- ls                    | mount-start-2-683585 | jenkins | v1.31.2 | 09 Oct 23 23:18 UTC | 09 Oct 23 23:18 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-683585                           | mount-start-2-683585 | jenkins | v1.31.2 | 09 Oct 23 23:18 UTC | 09 Oct 23 23:18 UTC |
	| delete  | -p mount-start-1-681714                           | mount-start-1-681714 | jenkins | v1.31.2 | 09 Oct 23 23:18 UTC | 09 Oct 23 23:18 UTC |
	| start   | -p multinode-717678                               | multinode-717678     | jenkins | v1.31.2 | 09 Oct 23 23:18 UTC | 09 Oct 23 23:20 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-717678 -- apply -f                   | multinode-717678     | jenkins | v1.31.2 | 09 Oct 23 23:20 UTC | 09 Oct 23 23:20 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-717678 -- rollout                    | multinode-717678     | jenkins | v1.31.2 | 09 Oct 23 23:20 UTC | 09 Oct 23 23:20 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-717678 -- get pods -o                | multinode-717678     | jenkins | v1.31.2 | 09 Oct 23 23:20 UTC | 09 Oct 23 23:20 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-717678 -- get pods -o                | multinode-717678     | jenkins | v1.31.2 | 09 Oct 23 23:20 UTC | 09 Oct 23 23:20 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-717678 -- exec                       | multinode-717678     | jenkins | v1.31.2 | 09 Oct 23 23:20 UTC | 09 Oct 23 23:20 UTC |
	|         | busybox-5bc68d56bd-2rmqx --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-717678 -- exec                       | multinode-717678     | jenkins | v1.31.2 | 09 Oct 23 23:20 UTC | 09 Oct 23 23:20 UTC |
	|         | busybox-5bc68d56bd-5q5k2 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-717678 -- exec                       | multinode-717678     | jenkins | v1.31.2 | 09 Oct 23 23:20 UTC | 09 Oct 23 23:20 UTC |
	|         | busybox-5bc68d56bd-2rmqx --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-717678 -- exec                       | multinode-717678     | jenkins | v1.31.2 | 09 Oct 23 23:20 UTC | 09 Oct 23 23:20 UTC |
	|         | busybox-5bc68d56bd-5q5k2 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-717678 -- exec                       | multinode-717678     | jenkins | v1.31.2 | 09 Oct 23 23:20 UTC | 09 Oct 23 23:20 UTC |
	|         | busybox-5bc68d56bd-2rmqx -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-717678 -- exec                       | multinode-717678     | jenkins | v1.31.2 | 09 Oct 23 23:20 UTC | 09 Oct 23 23:20 UTC |
	|         | busybox-5bc68d56bd-5q5k2 -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-717678 -- get pods -o                | multinode-717678     | jenkins | v1.31.2 | 09 Oct 23 23:20 UTC | 09 Oct 23 23:20 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-717678 -- exec                       | multinode-717678     | jenkins | v1.31.2 | 09 Oct 23 23:20 UTC | 09 Oct 23 23:20 UTC |
	|         | busybox-5bc68d56bd-2rmqx                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-717678 -- exec                       | multinode-717678     | jenkins | v1.31.2 | 09 Oct 23 23:20 UTC |                     |
	|         | busybox-5bc68d56bd-2rmqx -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-717678 -- exec                       | multinode-717678     | jenkins | v1.31.2 | 09 Oct 23 23:20 UTC | 09 Oct 23 23:20 UTC |
	|         | busybox-5bc68d56bd-5q5k2                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-717678 -- exec                       | multinode-717678     | jenkins | v1.31.2 | 09 Oct 23 23:20 UTC |                     |
	|         | busybox-5bc68d56bd-5q5k2 -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/09 23:18:40
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.21.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 23:18:40.561573 1609109 out.go:296] Setting OutFile to fd 1 ...
	I1009 23:18:40.561720 1609109 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1009 23:18:40.561729 1609109 out.go:309] Setting ErrFile to fd 2...
	I1009 23:18:40.561735 1609109 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1009 23:18:40.561999 1609109 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17375-1537865/.minikube/bin
	I1009 23:18:40.562406 1609109 out.go:303] Setting JSON to false
	I1009 23:18:40.563392 1609109 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":25264,"bootTime":1696868257,"procs":255,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1009 23:18:40.563486 1609109 start.go:138] virtualization:  
	I1009 23:18:40.565899 1609109 out.go:177] * [multinode-717678] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1009 23:18:40.568313 1609109 out.go:177]   - MINIKUBE_LOCATION=17375
	I1009 23:18:40.570209 1609109 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 23:18:40.568467 1609109 notify.go:220] Checking for updates...
	I1009 23:18:40.574590 1609109 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17375-1537865/kubeconfig
	I1009 23:18:40.576661 1609109 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17375-1537865/.minikube
	I1009 23:18:40.578755 1609109 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1009 23:18:40.580798 1609109 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 23:18:40.583177 1609109 driver.go:378] Setting default libvirt URI to qemu:///system
	I1009 23:18:40.608047 1609109 docker.go:122] docker version: linux-24.0.6:Docker Engine - Community
	I1009 23:18:40.608154 1609109 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 23:18:40.689374 1609109 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:35 SystemTime:2023-10-09 23:18:40.679561172 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1009 23:18:40.689482 1609109 docker.go:295] overlay module found
	I1009 23:18:40.691722 1609109 out.go:177] * Using the docker driver based on user configuration
	I1009 23:18:40.693446 1609109 start.go:298] selected driver: docker
	I1009 23:18:40.693462 1609109 start.go:902] validating driver "docker" against <nil>
	I1009 23:18:40.693473 1609109 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 23:18:40.694108 1609109 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 23:18:40.761432 1609109 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:35 SystemTime:2023-10-09 23:18:40.751922247 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
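The driver health probe above can be reproduced by hand. A minimal sketch, not minikube's exact code path, assuming only a working docker CLI:

    # Dump the daemon's info document as JSON, exactly as the Run: line above does;
    # a zero exit status plus parseable JSON is what marks the driver Healthy.
    docker system info --format "{{json .}}" > /dev/null && echo "docker: healthy"
    # minikube parses that JSON (info.go:266) to read CgroupDriver, NCPU, MemTotal, etc.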
	I1009 23:18:40.761603 1609109 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1009 23:18:40.761826 1609109 start_flags.go:926] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 23:18:40.763772 1609109 out.go:177] * Using Docker driver with root privileges
	I1009 23:18:40.765721 1609109 cni.go:84] Creating CNI manager for ""
	I1009 23:18:40.765744 1609109 cni.go:136] 0 nodes found, recommending kindnet
	I1009 23:18:40.765756 1609109 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 23:18:40.765771 1609109 start_flags.go:323] config:
	{Name:multinode-717678 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-717678 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1009 23:18:40.768357 1609109 out.go:177] * Starting control plane node multinode-717678 in cluster multinode-717678
	I1009 23:18:40.770264 1609109 cache.go:122] Beginning downloading kic base image for docker with crio
	I1009 23:18:40.772501 1609109 out.go:177] * Pulling base image ...
	I1009 23:18:40.774984 1609109 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1009 23:18:40.775057 1609109 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17375-1537865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4
	I1009 23:18:40.775072 1609109 cache.go:57] Caching tarball of preloaded images
	I1009 23:18:40.775071 1609109 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon
	I1009 23:18:40.775190 1609109 preload.go:174] Found /home/jenkins/minikube-integration/17375-1537865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1009 23:18:40.775201 1609109 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I1009 23:18:40.775548 1609109 profile.go:148] Saving config to /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/multinode-717678/config.json ...
	I1009 23:18:40.775580 1609109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/multinode-717678/config.json: {Name:mk4f03389e6ddc9bf43d4f4eb3e1253daadc12c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 23:18:40.792687 1609109 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon, skipping pull
	I1009 23:18:40.792711 1609109 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae exists in daemon, skipping load
	I1009 23:18:40.792738 1609109 cache.go:195] Successfully downloaded all kic artifacts
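For reference, the presence check that lets the pull be skipped can be approximated with `docker image inspect`; a sketch, with the sha256 digest elided for readability:

    # Exit status 0 means the kicbase image is already in the local daemon,
    # so the pull is skipped (image.go:83 above).
    docker image inspect gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345 \
      > /dev/null 2>&1 && echo "kicbase present, skipping pull"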
	I1009 23:18:40.792793 1609109 start.go:365] acquiring machines lock for multinode-717678: {Name:mk2c2609ccef425bc1a4193f25a2505e9bbef29f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 23:18:40.792924 1609109 start.go:369] acquired machines lock for "multinode-717678" in 112.518µs
	I1009 23:18:40.792953 1609109 start.go:93] Provisioning new machine with config: &{Name:multinode-717678 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-717678 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 23:18:40.793039 1609109 start.go:125] createHost starting for "" (driver="docker")
	I1009 23:18:40.795571 1609109 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1009 23:18:40.795870 1609109 start.go:159] libmachine.API.Create for "multinode-717678" (driver="docker")
	I1009 23:18:40.795904 1609109 client.go:168] LocalClient.Create starting
	I1009 23:18:40.796002 1609109 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/ca.pem
	I1009 23:18:40.796046 1609109 main.go:141] libmachine: Decoding PEM data...
	I1009 23:18:40.796068 1609109 main.go:141] libmachine: Parsing certificate...
	I1009 23:18:40.796128 1609109 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/cert.pem
	I1009 23:18:40.796153 1609109 main.go:141] libmachine: Decoding PEM data...
	I1009 23:18:40.796165 1609109 main.go:141] libmachine: Parsing certificate...
	I1009 23:18:40.796556 1609109 cli_runner.go:164] Run: docker network inspect multinode-717678 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1009 23:18:40.818113 1609109 cli_runner.go:211] docker network inspect multinode-717678 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1009 23:18:40.818194 1609109 network_create.go:281] running [docker network inspect multinode-717678] to gather additional debugging logs...
	I1009 23:18:40.818217 1609109 cli_runner.go:164] Run: docker network inspect multinode-717678
	W1009 23:18:40.835519 1609109 cli_runner.go:211] docker network inspect multinode-717678 returned with exit code 1
	I1009 23:18:40.835555 1609109 network_create.go:284] error running [docker network inspect multinode-717678]: docker network inspect multinode-717678: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-717678 not found
	I1009 23:18:40.835569 1609109 network_create.go:286] output of [docker network inspect multinode-717678]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-717678 not found
	
	** /stderr **
	I1009 23:18:40.835694 1609109 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 23:18:40.853469 1609109 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-bbbaf27e04e4 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:09:6a:d9:0c} reservation:<nil>}
	I1009 23:18:40.853816 1609109 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400251f3f0}
	I1009 23:18:40.853838 1609109 network_create.go:124] attempt to create docker network multinode-717678 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1009 23:18:40.853899 1609109 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-717678 multinode-717678
	I1009 23:18:40.928792 1609109 network_create.go:108] docker network multinode-717678 192.168.58.0/24 created
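The network step above amounts to an inspect-then-create pair; a sketch assembled from the flags in the Run: lines:

    # inspect fails (exit 1, "network ... not found") when the network is absent,
    # after which the first free private /24 is used for the new bridge.
    docker network inspect multinode-717678 > /dev/null 2>&1 || \
      docker network create --driver=bridge \
        --subnet=192.168.58.0/24 --gateway=192.168.58.1 \
        -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
        --label=created_by.minikube.sigs.k8s.io=true \
        --label=name.minikube.sigs.k8s.io=multinode-717678 \
        multinode-717678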
	I1009 23:18:40.928825 1609109 kic.go:118] calculated static IP "192.168.58.2" for the "multinode-717678" container
	I1009 23:18:40.928910 1609109 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1009 23:18:40.945566 1609109 cli_runner.go:164] Run: docker volume create multinode-717678 --label name.minikube.sigs.k8s.io=multinode-717678 --label created_by.minikube.sigs.k8s.io=true
	I1009 23:18:40.964882 1609109 oci.go:103] Successfully created a docker volume multinode-717678
	I1009 23:18:40.965009 1609109 cli_runner.go:164] Run: docker run --rm --name multinode-717678-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-717678 --entrypoint /usr/bin/test -v multinode-717678:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -d /var/lib
	I1009 23:18:41.549530 1609109 oci.go:107] Successfully prepared a docker volume multinode-717678
	I1009 23:18:41.549574 1609109 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1009 23:18:41.549595 1609109 kic.go:191] Starting extracting preloaded images to volume ...
	I1009 23:18:41.549706 1609109 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17375-1537865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-717678:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -I lz4 -xf /preloaded.tar -C /extractDir
	I1009 23:18:46.014598 1609109 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17375-1537865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-717678:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -I lz4 -xf /preloaded.tar -C /extractDir: (4.464682886s)
	I1009 23:18:46.014638 1609109 kic.go:200] duration metric: took 4.465039 seconds to extract preloaded images to volume
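The extraction step is a throwaway container whose only job is to untar the preload into the named volume; a sketch of the command logged above, with the kicbase digest elided:

    PRELOAD=/home/jenkins/minikube-integration/17375-1537865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4
    # Mount the tarball read-only, the volume at /extractDir, and untar with lz4;
    # the volume later becomes the node container's /var (~4.5s in this run).
    docker run --rm --entrypoint /usr/bin/tar \
      -v "$PRELOAD:/preloaded.tar:ro" -v multinode-717678:/extractDir \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345 \
      -I lz4 -xf /preloaded.tar -C /extractDir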
	W1009 23:18:46.014899 1609109 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1009 23:18:46.015011 1609109 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1009 23:18:46.084364 1609109 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-717678 --name multinode-717678 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-717678 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-717678 --network multinode-717678 --ip 192.168.58.2 --volume multinode-717678:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae
	I1009 23:18:46.436484 1609109 cli_runner.go:164] Run: docker container inspect multinode-717678 --format={{.State.Running}}
	I1009 23:18:46.461378 1609109 cli_runner.go:164] Run: docker container inspect multinode-717678 --format={{.State.Status}}
	I1009 23:18:46.491372 1609109 cli_runner.go:164] Run: docker exec multinode-717678 stat /var/lib/dpkg/alternatives/iptables
	I1009 23:18:46.573772 1609109 oci.go:144] the created container "multinode-717678" has a running status.
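Condensed, the node launch above is an ordinary docker run; flags abridged from the full Run: line, digest elided:

    docker run -d -t --privileged \
      --security-opt seccomp=unconfined --security-opt apparmor=unconfined \
      --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro \
      --network multinode-717678 --ip 192.168.58.2 --volume multinode-717678:/var \
      --memory=2200mb --cpus=2 --hostname multinode-717678 --name multinode-717678 \
      --expose 8443 --publish=127.0.0.1::22 --publish=127.0.0.1::8443 \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345
    # The follow-up inspect calls poll the container state until it is running:
    docker container inspect multinode-717678 --format '{{.State.Running}}'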
	I1009 23:18:46.573818 1609109 kic.go:222] Creating ssh key for kic: /home/jenkins/minikube-integration/17375-1537865/.minikube/machines/multinode-717678/id_rsa...
	I1009 23:18:48.163265 1609109 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-1537865/.minikube/machines/multinode-717678/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1009 23:18:48.163333 1609109 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17375-1537865/.minikube/machines/multinode-717678/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1009 23:18:48.185470 1609109 cli_runner.go:164] Run: docker container inspect multinode-717678 --format={{.State.Status}}
	I1009 23:18:48.203877 1609109 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1009 23:18:48.203902 1609109 kic_runner.go:114] Args: [docker exec --privileged multinode-717678 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1009 23:18:48.267375 1609109 cli_runner.go:164] Run: docker container inspect multinode-717678 --format={{.State.Status}}
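The key install above pushes the public key into the container and fixes its ownership. An equivalent sketch using docker cp (minikube itself streams the bytes through docker exec, per kic_runner.go):

    # Generate the machine key; minikube stores it under .minikube/machines/<name>/.
    ssh-keygen -t rsa -N "" -f ./id_rsa
    docker cp ./id_rsa.pub multinode-717678:/home/docker/.ssh/authorized_keys
    docker exec --privileged multinode-717678 \
      chown docker:docker /home/docker/.ssh/authorized_keys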
	I1009 23:18:48.288461 1609109 machine.go:88] provisioning docker machine ...
	I1009 23:18:48.288493 1609109 ubuntu.go:169] provisioning hostname "multinode-717678"
	I1009 23:18:48.288565 1609109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-717678
	I1009 23:18:48.306758 1609109 main.go:141] libmachine: Using SSH client type: native
	I1009 23:18:48.307258 1609109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 34434 <nil> <nil>}
	I1009 23:18:48.307279 1609109 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-717678 && echo "multinode-717678" | sudo tee /etc/hostname
	I1009 23:18:48.454230 1609109 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-717678
	
	I1009 23:18:48.454315 1609109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-717678
	I1009 23:18:48.473684 1609109 main.go:141] libmachine: Using SSH client type: native
	I1009 23:18:48.474138 1609109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 34434 <nil> <nil>}
	I1009 23:18:48.474160 1609109 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-717678' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-717678/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-717678' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 23:18:48.604554 1609109 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 23:18:48.604582 1609109 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17375-1537865/.minikube CaCertPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17375-1537865/.minikube}
	I1009 23:18:48.604615 1609109 ubuntu.go:177] setting up certificates
	I1009 23:18:48.604624 1609109 provision.go:83] configureAuth start
	I1009 23:18:48.604696 1609109 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-717678
	I1009 23:18:48.623255 1609109 provision.go:138] copyHostCerts
	I1009 23:18:48.623302 1609109 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17375-1537865/.minikube/ca.pem
	I1009 23:18:48.623334 1609109 exec_runner.go:144] found /home/jenkins/minikube-integration/17375-1537865/.minikube/ca.pem, removing ...
	I1009 23:18:48.623344 1609109 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17375-1537865/.minikube/ca.pem
	I1009 23:18:48.623428 1609109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17375-1537865/.minikube/ca.pem (1078 bytes)
	I1009 23:18:48.623518 1609109 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17375-1537865/.minikube/cert.pem
	I1009 23:18:48.623540 1609109 exec_runner.go:144] found /home/jenkins/minikube-integration/17375-1537865/.minikube/cert.pem, removing ...
	I1009 23:18:48.623544 1609109 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17375-1537865/.minikube/cert.pem
	I1009 23:18:48.623572 1609109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17375-1537865/.minikube/cert.pem (1123 bytes)
	I1009 23:18:48.623623 1609109 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17375-1537865/.minikube/key.pem
	I1009 23:18:48.623652 1609109 exec_runner.go:144] found /home/jenkins/minikube-integration/17375-1537865/.minikube/key.pem, removing ...
	I1009 23:18:48.623660 1609109 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17375-1537865/.minikube/key.pem
	I1009 23:18:48.623685 1609109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17375-1537865/.minikube/key.pem (1679 bytes)
	I1009 23:18:48.623742 1609109 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17375-1537865/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17375-1537865/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17375-1537865/.minikube/certs/ca-key.pem org=jenkins.multinode-717678 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-717678]
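minikube mints the server certificate in Go; a hypothetical openssl equivalent that covers the same SAN list is sketched below (file names match the paths in the log, the openssl flow itself is an assumption, not minikube's code):

    # Key + CSR for the machine, then sign with the cluster CA, adding every SAN
    # from the log line (the IPs and hostnames the apiserver must be reachable as).
    openssl req -newkey rsa:2048 -nodes -keyout server-key.pem \
      -subj "/O=jenkins.multinode-717678" -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -days 1095 -out server.pem \
      -extfile <(printf 'subjectAltName=IP:192.168.58.2,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:multinode-717678')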
	I1009 23:18:49.642129 1609109 provision.go:172] copyRemoteCerts
	I1009 23:18:49.642199 1609109 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 23:18:49.642251 1609109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-717678
	I1009 23:18:49.660989 1609109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34434 SSHKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/machines/multinode-717678/id_rsa Username:docker}
	I1009 23:18:49.758236 1609109 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 23:18:49.758306 1609109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 23:18:49.788411 1609109 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-1537865/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 23:18:49.788480 1609109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-1537865/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1009 23:18:49.816900 1609109 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-1537865/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 23:18:49.816964 1609109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-1537865/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 23:18:49.846185 1609109 provision.go:86] duration metric: configureAuth took 1.241512731s
	I1009 23:18:49.846211 1609109 ubuntu.go:193] setting minikube options for container-runtime
	I1009 23:18:49.846402 1609109 config.go:182] Loaded profile config "multinode-717678": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1009 23:18:49.846547 1609109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-717678
	I1009 23:18:49.865087 1609109 main.go:141] libmachine: Using SSH client type: native
	I1009 23:18:49.865581 1609109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 34434 <nil> <nil>}
	I1009 23:18:49.865606 1609109 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 23:18:50.141326 1609109 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
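The `%!s(MISSING)` above is Go's fmt placeholder for a dropped format argument; judging from the echoed output, the command the SSH session actually runs presumably expands to:

    sudo mkdir -p /etc/sysconfig && printf "%s" "
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    " | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio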
	I1009 23:18:50.141351 1609109 machine.go:91] provisioned docker machine in 1.852869439s
	I1009 23:18:50.141362 1609109 client.go:171] LocalClient.Create took 9.345446663s
	I1009 23:18:50.141376 1609109 start.go:167] duration metric: libmachine.API.Create for "multinode-717678" took 9.345507233s
	I1009 23:18:50.141384 1609109 start.go:300] post-start starting for "multinode-717678" (driver="docker")
	I1009 23:18:50.141394 1609109 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 23:18:50.141465 1609109 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 23:18:50.141513 1609109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-717678
	I1009 23:18:50.161786 1609109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34434 SSHKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/machines/multinode-717678/id_rsa Username:docker}
	I1009 23:18:50.258308 1609109 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 23:18:50.262420 1609109 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I1009 23:18:50.262439 1609109 command_runner.go:130] > NAME="Ubuntu"
	I1009 23:18:50.262446 1609109 command_runner.go:130] > VERSION_ID="22.04"
	I1009 23:18:50.262452 1609109 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I1009 23:18:50.262458 1609109 command_runner.go:130] > VERSION_CODENAME=jammy
	I1009 23:18:50.262462 1609109 command_runner.go:130] > ID=ubuntu
	I1009 23:18:50.262467 1609109 command_runner.go:130] > ID_LIKE=debian
	I1009 23:18:50.262474 1609109 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1009 23:18:50.262480 1609109 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1009 23:18:50.262499 1609109 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1009 23:18:50.262510 1609109 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1009 23:18:50.262519 1609109 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I1009 23:18:50.262586 1609109 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 23:18:50.262616 1609109 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1009 23:18:50.262630 1609109 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1009 23:18:50.262643 1609109 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1009 23:18:50.262654 1609109 filesync.go:126] Scanning /home/jenkins/minikube-integration/17375-1537865/.minikube/addons for local assets ...
	I1009 23:18:50.262713 1609109 filesync.go:126] Scanning /home/jenkins/minikube-integration/17375-1537865/.minikube/files for local assets ...
	I1009 23:18:50.262802 1609109 filesync.go:149] local asset: /home/jenkins/minikube-integration/17375-1537865/.minikube/files/etc/ssl/certs/15432152.pem -> 15432152.pem in /etc/ssl/certs
	I1009 23:18:50.262813 1609109 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-1537865/.minikube/files/etc/ssl/certs/15432152.pem -> /etc/ssl/certs/15432152.pem
	I1009 23:18:50.262917 1609109 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 23:18:50.273481 1609109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-1537865/.minikube/files/etc/ssl/certs/15432152.pem --> /etc/ssl/certs/15432152.pem (1708 bytes)
	I1009 23:18:50.301126 1609109 start.go:303] post-start completed in 159.726635ms
	I1009 23:18:50.301525 1609109 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-717678
	I1009 23:18:50.318914 1609109 profile.go:148] Saving config to /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/multinode-717678/config.json ...
	I1009 23:18:50.319208 1609109 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 23:18:50.319259 1609109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-717678
	I1009 23:18:50.336750 1609109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34434 SSHKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/machines/multinode-717678/id_rsa Username:docker}
	I1009 23:18:50.433209 1609109 command_runner.go:130] > 13%!
	(MISSING)I1009 23:18:50.433294 1609109 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 23:18:50.438695 1609109 command_runner.go:130] > 169G
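The two df probes report used-percentage and free gigabytes on /var; cleaned up, they are:

    df -h /var | awk 'NR==2{print $5}'    # the "13%" above (the %! is a fmt artifact)
    df -BG /var | awk 'NR==2{print $4}'   # the "169G" above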
	I1009 23:18:50.438918 1609109 start.go:128] duration metric: createHost completed in 9.645868021s
	I1009 23:18:50.438933 1609109 start.go:83] releasing machines lock for "multinode-717678", held for 9.645999368s
	I1009 23:18:50.439009 1609109 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-717678
	I1009 23:18:50.457465 1609109 ssh_runner.go:195] Run: cat /version.json
	I1009 23:18:50.457528 1609109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-717678
	I1009 23:18:50.457790 1609109 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 23:18:50.457870 1609109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-717678
	I1009 23:18:50.483277 1609109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34434 SSHKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/machines/multinode-717678/id_rsa Username:docker}
	I1009 23:18:50.494056 1609109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34434 SSHKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/machines/multinode-717678/id_rsa Username:docker}
	I1009 23:18:50.575370 1609109 command_runner.go:130] > {"iso_version": "v1.31.0-1695060926-17240", "kicbase_version": "v0.0.40-1696360059-17345", "minikube_version": "v1.31.2", "commit": "3da829742e24bcb762d99c062a7806436d0f28e3"}
	I1009 23:18:50.575526 1609109 ssh_runner.go:195] Run: systemctl --version
	I1009 23:18:50.725662 1609109 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1009 23:18:50.728871 1609109 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.10)
	I1009 23:18:50.728914 1609109 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1009 23:18:50.728995 1609109 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 23:18:50.879996 1609109 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1009 23:18:50.885357 1609109 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I1009 23:18:50.885379 1609109 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I1009 23:18:50.885396 1609109 command_runner.go:130] > Device: 3ah/58d	Inode: 1304922     Links: 1
	I1009 23:18:50.885404 1609109 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1009 23:18:50.885414 1609109 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I1009 23:18:50.885424 1609109 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I1009 23:18:50.885431 1609109 command_runner.go:130] > Change: 2023-10-09 22:55:09.641389644 +0000
	I1009 23:18:50.885439 1609109 command_runner.go:130] >  Birth: 2023-10-09 22:55:09.641389644 +0000
	I1009 23:18:50.885849 1609109 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 23:18:50.908838 1609109 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1009 23:18:50.908915 1609109 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 23:18:50.946416 1609109 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I1009 23:18:50.946504 1609109 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
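With quoting restored (the logged form leans on the remote shell's glob handling), the bridge/podman CNI configs are renamed roughly like this, so that kindnet remains the only CNI:

    # Append .mk_disabled to every bridge/podman config not already disabled;
    # CRI-O then ignores them and kindnet supplies the cluster network.
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;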
	I1009 23:18:50.946548 1609109 start.go:472] detecting cgroup driver to use...
	I1009 23:18:50.946596 1609109 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1009 23:18:50.946677 1609109 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 23:18:50.965217 1609109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 23:18:50.979789 1609109 docker.go:198] disabling cri-docker service (if available) ...
	I1009 23:18:50.979854 1609109 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 23:18:50.996586 1609109 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 23:18:51.017092 1609109 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 23:18:51.113385 1609109 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 23:18:51.215862 1609109 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1009 23:18:51.215894 1609109 docker.go:214] disabling docker service ...
	I1009 23:18:51.215951 1609109 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 23:18:51.238044 1609109 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 23:18:51.252211 1609109 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 23:18:51.344194 1609109 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1009 23:18:51.344281 1609109 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 23:18:51.442527 1609109 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1009 23:18:51.442637 1609109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
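The runtime shuffle above boils down to stopping and masking everything but CRI-O; as a plain shell sequence:

    # cri-dockerd first, then docker itself; masking prevents socket activation
    # from resurrecting either unit.
    sudo systemctl stop -f cri-docker.socket cri-docker.service
    sudo systemctl disable cri-docker.socket && sudo systemctl mask cri-docker.service
    sudo systemctl stop -f docker.socket docker.service
    sudo systemctl disable docker.socket && sudo systemctl mask docker.service
    systemctl is-active --quiet docker || echo "docker: inactive"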
	I1009 23:18:51.456306 1609109 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 23:18:51.475983 1609109 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1009 23:18:51.477361 1609109 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1009 23:18:51.477425 1609109 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 23:18:51.490027 1609109 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 23:18:51.490100 1609109 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 23:18:51.503237 1609109 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 23:18:51.515969 1609109 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 23:18:51.528418 1609109 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 23:18:51.539851 1609109 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 23:18:51.549363 1609109 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1009 23:18:51.550679 1609109 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 23:18:51.561804 1609109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 23:18:51.652628 1609109 ssh_runner.go:195] Run: sudo systemctl restart crio
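The CRI-O reconfiguration is four in-place sed edits to the drop-in conf followed by a restart; collected from the Run: lines above ($CONF is shorthand, not from the log):

    CONF=/etc/crio/crio.conf.d/02-crio.conf
    # Pin the pause image and the cgroupfs driver detected on the host,
    # and force conmon into the pod cgroup.
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
    sudo systemctl daemon-reload && sudo systemctl restart crio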
	I1009 23:18:51.786772 1609109 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 23:18:51.786854 1609109 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 23:18:51.791711 1609109 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1009 23:18:51.791734 1609109 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1009 23:18:51.791743 1609109 command_runner.go:130] > Device: 44h/68d	Inode: 190         Links: 1
	I1009 23:18:51.791752 1609109 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1009 23:18:51.791758 1609109 command_runner.go:130] > Access: 2023-10-09 23:18:51.769877778 +0000
	I1009 23:18:51.791766 1609109 command_runner.go:130] > Modify: 2023-10-09 23:18:51.769877778 +0000
	I1009 23:18:51.791772 1609109 command_runner.go:130] > Change: 2023-10-09 23:18:51.769877778 +0000
	I1009 23:18:51.791777 1609109 command_runner.go:130] >  Birth: -
	I1009 23:18:51.791814 1609109 start.go:540] Will wait 60s for crictl version
	I1009 23:18:51.791872 1609109 ssh_runner.go:195] Run: which crictl
	I1009 23:18:51.796288 1609109 command_runner.go:130] > /usr/bin/crictl
	I1009 23:18:51.796388 1609109 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 23:18:51.834115 1609109 command_runner.go:130] > Version:  0.1.0
	I1009 23:18:51.834422 1609109 command_runner.go:130] > RuntimeName:  cri-o
	I1009 23:18:51.834435 1609109 command_runner.go:130] > RuntimeVersion:  1.24.6
	I1009 23:18:51.834443 1609109 command_runner.go:130] > RuntimeApiVersion:  v1
	I1009 23:18:51.837017 1609109 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1009 23:18:51.837105 1609109 ssh_runner.go:195] Run: crio --version
	I1009 23:18:51.878156 1609109 command_runner.go:130] > crio version 1.24.6
	I1009 23:18:51.878178 1609109 command_runner.go:130] > Version:          1.24.6
	I1009 23:18:51.878187 1609109 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1009 23:18:51.878193 1609109 command_runner.go:130] > GitTreeState:     clean
	I1009 23:18:51.878199 1609109 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1009 23:18:51.878206 1609109 command_runner.go:130] > GoVersion:        go1.18.2
	I1009 23:18:51.878211 1609109 command_runner.go:130] > Compiler:         gc
	I1009 23:18:51.878221 1609109 command_runner.go:130] > Platform:         linux/arm64
	I1009 23:18:51.878228 1609109 command_runner.go:130] > Linkmode:         dynamic
	I1009 23:18:51.878241 1609109 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1009 23:18:51.878246 1609109 command_runner.go:130] > SeccompEnabled:   true
	I1009 23:18:51.878254 1609109 command_runner.go:130] > AppArmorEnabled:  false
	I1009 23:18:51.880374 1609109 ssh_runner.go:195] Run: crio --version
	I1009 23:18:51.921538 1609109 command_runner.go:130] > crio version 1.24.6
	I1009 23:18:51.921560 1609109 command_runner.go:130] > Version:          1.24.6
	I1009 23:18:51.921569 1609109 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1009 23:18:51.921574 1609109 command_runner.go:130] > GitTreeState:     clean
	I1009 23:18:51.921581 1609109 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1009 23:18:51.921586 1609109 command_runner.go:130] > GoVersion:        go1.18.2
	I1009 23:18:51.921591 1609109 command_runner.go:130] > Compiler:         gc
	I1009 23:18:51.921597 1609109 command_runner.go:130] > Platform:         linux/arm64
	I1009 23:18:51.921603 1609109 command_runner.go:130] > Linkmode:         dynamic
	I1009 23:18:51.921612 1609109 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1009 23:18:51.921618 1609109 command_runner.go:130] > SeccompEnabled:   true
	I1009 23:18:51.921629 1609109 command_runner.go:130] > AppArmorEnabled:  false
	I1009 23:18:51.926949 1609109 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.6 ...
	I1009 23:18:51.929199 1609109 cli_runner.go:164] Run: docker network inspect multinode-717678 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 23:18:51.946658 1609109 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1009 23:18:51.951714 1609109 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 23:18:51.965640 1609109 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1009 23:18:51.965710 1609109 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 23:18:52.036207 1609109 command_runner.go:130] > {
	I1009 23:18:52.036226 1609109 command_runner.go:130] >   "images": [
	I1009 23:18:52.036232 1609109 command_runner.go:130] >     {
	I1009 23:18:52.036242 1609109 command_runner.go:130] >       "id": "04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26",
	I1009 23:18:52.036248 1609109 command_runner.go:130] >       "repoTags": [
	I1009 23:18:52.036256 1609109 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1009 23:18:52.036260 1609109 command_runner.go:130] >       ],
	I1009 23:18:52.036270 1609109 command_runner.go:130] >       "repoDigests": [
	I1009 23:18:52.036280 1609109 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1009 23:18:52.036294 1609109 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2"
	I1009 23:18:52.036298 1609109 command_runner.go:130] >       ],
	I1009 23:18:52.036304 1609109 command_runner.go:130] >       "size": "60867618",
	I1009 23:18:52.036311 1609109 command_runner.go:130] >       "uid": null,
	I1009 23:18:52.036316 1609109 command_runner.go:130] >       "username": "",
	I1009 23:18:52.036326 1609109 command_runner.go:130] >       "spec": null,
	I1009 23:18:52.036335 1609109 command_runner.go:130] >       "pinned": false
	I1009 23:18:52.036343 1609109 command_runner.go:130] >     },
	I1009 23:18:52.036353 1609109 command_runner.go:130] >     {
	I1009 23:18:52.036362 1609109 command_runner.go:130] >       "id": "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1009 23:18:52.036372 1609109 command_runner.go:130] >       "repoTags": [
	I1009 23:18:52.036379 1609109 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1009 23:18:52.036391 1609109 command_runner.go:130] >       ],
	I1009 23:18:52.036399 1609109 command_runner.go:130] >       "repoDigests": [
	I1009 23:18:52.036409 1609109 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1009 23:18:52.036419 1609109 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1009 23:18:52.036427 1609109 command_runner.go:130] >       ],
	I1009 23:18:52.036434 1609109 command_runner.go:130] >       "size": "29037500",
	I1009 23:18:52.036439 1609109 command_runner.go:130] >       "uid": null,
	I1009 23:18:52.036449 1609109 command_runner.go:130] >       "username": "",
	I1009 23:18:52.036454 1609109 command_runner.go:130] >       "spec": null,
	I1009 23:18:52.036459 1609109 command_runner.go:130] >       "pinned": false
	I1009 23:18:52.036468 1609109 command_runner.go:130] >     },
	I1009 23:18:52.036473 1609109 command_runner.go:130] >     {
	I1009 23:18:52.036481 1609109 command_runner.go:130] >       "id": "97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108",
	I1009 23:18:52.036488 1609109 command_runner.go:130] >       "repoTags": [
	I1009 23:18:52.036495 1609109 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1009 23:18:52.036500 1609109 command_runner.go:130] >       ],
	I1009 23:18:52.036507 1609109 command_runner.go:130] >       "repoDigests": [
	I1009 23:18:52.036521 1609109 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105",
	I1009 23:18:52.036531 1609109 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"
	I1009 23:18:52.036539 1609109 command_runner.go:130] >       ],
	I1009 23:18:52.036544 1609109 command_runner.go:130] >       "size": "51393451",
	I1009 23:18:52.036550 1609109 command_runner.go:130] >       "uid": null,
	I1009 23:18:52.036558 1609109 command_runner.go:130] >       "username": "",
	I1009 23:18:52.036564 1609109 command_runner.go:130] >       "spec": null,
	I1009 23:18:52.036569 1609109 command_runner.go:130] >       "pinned": false
	I1009 23:18:52.036573 1609109 command_runner.go:130] >     },
	I1009 23:18:52.036580 1609109 command_runner.go:130] >     {
	I1009 23:18:52.036588 1609109 command_runner.go:130] >       "id": "9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace",
	I1009 23:18:52.036593 1609109 command_runner.go:130] >       "repoTags": [
	I1009 23:18:52.036601 1609109 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1009 23:18:52.036609 1609109 command_runner.go:130] >       ],
	I1009 23:18:52.036633 1609109 command_runner.go:130] >       "repoDigests": [
	I1009 23:18:52.036645 1609109 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3",
	I1009 23:18:52.036655 1609109 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b"
	I1009 23:18:52.036668 1609109 command_runner.go:130] >       ],
	I1009 23:18:52.036673 1609109 command_runner.go:130] >       "size": "182203183",
	I1009 23:18:52.036678 1609109 command_runner.go:130] >       "uid": {
	I1009 23:18:52.036685 1609109 command_runner.go:130] >         "value": "0"
	I1009 23:18:52.036694 1609109 command_runner.go:130] >       },
	I1009 23:18:52.036699 1609109 command_runner.go:130] >       "username": "",
	I1009 23:18:52.036704 1609109 command_runner.go:130] >       "spec": null,
	I1009 23:18:52.036714 1609109 command_runner.go:130] >       "pinned": false
	I1009 23:18:52.036719 1609109 command_runner.go:130] >     },
	I1009 23:18:52.036724 1609109 command_runner.go:130] >     {
	I1009 23:18:52.036736 1609109 command_runner.go:130] >       "id": "30bb499447fe1bc655853e2e8ac386cdeb28c80263536259cb54f290f9a58d6c",
	I1009 23:18:52.036741 1609109 command_runner.go:130] >       "repoTags": [
	I1009 23:18:52.036747 1609109 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.2"
	I1009 23:18:52.036755 1609109 command_runner.go:130] >       ],
	I1009 23:18:52.036761 1609109 command_runner.go:130] >       "repoDigests": [
	I1009 23:18:52.036772 1609109 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:07742a71be5e2ac5dc434618fa720ba38bebb463e3bdc0c58b600b4f7716bc3d",
	I1009 23:18:52.036784 1609109 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:6beea2e5531a0606613594fd3ed92d71bbdcef99dd3237522049a0b32cad736c"
	I1009 23:18:52.036789 1609109 command_runner.go:130] >       ],
	I1009 23:18:52.036800 1609109 command_runner.go:130] >       "size": "121054158",
	I1009 23:18:52.036806 1609109 command_runner.go:130] >       "uid": {
	I1009 23:18:52.036816 1609109 command_runner.go:130] >         "value": "0"
	I1009 23:18:52.036821 1609109 command_runner.go:130] >       },
	I1009 23:18:52.036826 1609109 command_runner.go:130] >       "username": "",
	I1009 23:18:52.036831 1609109 command_runner.go:130] >       "spec": null,
	I1009 23:18:52.036840 1609109 command_runner.go:130] >       "pinned": false
	I1009 23:18:52.036844 1609109 command_runner.go:130] >     },
	I1009 23:18:52.036848 1609109 command_runner.go:130] >     {
	I1009 23:18:52.036859 1609109 command_runner.go:130] >       "id": "89d57b83c17862d0ca2dd214e9e5ad425f8d67ecba32d10b846f8d22d3b5597c",
	I1009 23:18:52.036867 1609109 command_runner.go:130] >       "repoTags": [
	I1009 23:18:52.036874 1609109 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.2"
	I1009 23:18:52.036882 1609109 command_runner.go:130] >       ],
	I1009 23:18:52.036888 1609109 command_runner.go:130] >       "repoDigests": [
	I1009 23:18:52.036897 1609109 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:3c85f8a91743f4c306163137b121c64816c5c15bf2f002c9238ce0a4ac22b5c8",
	I1009 23:18:52.036915 1609109 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4"
	I1009 23:18:52.036920 1609109 command_runner.go:130] >       ],
	I1009 23:18:52.036928 1609109 command_runner.go:130] >       "size": "117187380",
	I1009 23:18:52.036933 1609109 command_runner.go:130] >       "uid": {
	I1009 23:18:52.036938 1609109 command_runner.go:130] >         "value": "0"
	I1009 23:18:52.036944 1609109 command_runner.go:130] >       },
	I1009 23:18:52.036950 1609109 command_runner.go:130] >       "username": "",
	I1009 23:18:52.036958 1609109 command_runner.go:130] >       "spec": null,
	I1009 23:18:52.036964 1609109 command_runner.go:130] >       "pinned": false
	I1009 23:18:52.036968 1609109 command_runner.go:130] >     },
	I1009 23:18:52.036976 1609109 command_runner.go:130] >     {
	I1009 23:18:52.036985 1609109 command_runner.go:130] >       "id": "7da62c127fc0f2c3473babe4dd0fe1da874278c4e524a490b1781e3e0e6dddfa",
	I1009 23:18:52.036995 1609109 command_runner.go:130] >       "repoTags": [
	I1009 23:18:52.037002 1609109 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.2"
	I1009 23:18:52.037006 1609109 command_runner.go:130] >       ],
	I1009 23:18:52.037012 1609109 command_runner.go:130] >       "repoDigests": [
	I1009 23:18:52.037024 1609109 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:41c8f92d1cd571e0e36af431f35c78379f84f5daf5b85d43014a9940d697afcf",
	I1009 23:18:52.037033 1609109 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:714d43ef0334cfb0e15ffd89f0b385681374b72a4865be28ff891b6297c015b8"
	I1009 23:18:52.037043 1609109 command_runner.go:130] >       ],
	I1009 23:18:52.037048 1609109 command_runner.go:130] >       "size": "69926807",
	I1009 23:18:52.037054 1609109 command_runner.go:130] >       "uid": null,
	I1009 23:18:52.037062 1609109 command_runner.go:130] >       "username": "",
	I1009 23:18:52.037067 1609109 command_runner.go:130] >       "spec": null,
	I1009 23:18:52.037073 1609109 command_runner.go:130] >       "pinned": false
	I1009 23:18:52.037081 1609109 command_runner.go:130] >     },
	I1009 23:18:52.037085 1609109 command_runner.go:130] >     {
	I1009 23:18:52.037094 1609109 command_runner.go:130] >       "id": "64fc40cee3716a4596d219b360ce536adb7998eaeae3f5dbb774d6503f5039d7",
	I1009 23:18:52.037099 1609109 command_runner.go:130] >       "repoTags": [
	I1009 23:18:52.037107 1609109 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.2"
	I1009 23:18:52.037112 1609109 command_runner.go:130] >       ],
	I1009 23:18:52.037117 1609109 command_runner.go:130] >       "repoDigests": [
	I1009 23:18:52.037170 1609109 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab",
	I1009 23:18:52.037183 1609109 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:f673cc4710d8ed6e3bd224b5641d2537d08e19177a291c2d9e189ea16f081c88"
	I1009 23:18:52.037187 1609109 command_runner.go:130] >       ],
	I1009 23:18:52.037192 1609109 command_runner.go:130] >       "size": "59188020",
	I1009 23:18:52.037197 1609109 command_runner.go:130] >       "uid": {
	I1009 23:18:52.037205 1609109 command_runner.go:130] >         "value": "0"
	I1009 23:18:52.037209 1609109 command_runner.go:130] >       },
	I1009 23:18:52.037214 1609109 command_runner.go:130] >       "username": "",
	I1009 23:18:52.037219 1609109 command_runner.go:130] >       "spec": null,
	I1009 23:18:52.037224 1609109 command_runner.go:130] >       "pinned": false
	I1009 23:18:52.037229 1609109 command_runner.go:130] >     },
	I1009 23:18:52.037236 1609109 command_runner.go:130] >     {
	I1009 23:18:52.037244 1609109 command_runner.go:130] >       "id": "829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e",
	I1009 23:18:52.037255 1609109 command_runner.go:130] >       "repoTags": [
	I1009 23:18:52.037260 1609109 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1009 23:18:52.037265 1609109 command_runner.go:130] >       ],
	I1009 23:18:52.037274 1609109 command_runner.go:130] >       "repoDigests": [
	I1009 23:18:52.037284 1609109 command_runner.go:130] >         "registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6",
	I1009 23:18:52.037295 1609109 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"
	I1009 23:18:52.037300 1609109 command_runner.go:130] >       ],
	I1009 23:18:52.037306 1609109 command_runner.go:130] >       "size": "520014",
	I1009 23:18:52.037312 1609109 command_runner.go:130] >       "uid": {
	I1009 23:18:52.037318 1609109 command_runner.go:130] >         "value": "65535"
	I1009 23:18:52.037325 1609109 command_runner.go:130] >       },
	I1009 23:18:52.037333 1609109 command_runner.go:130] >       "username": "",
	I1009 23:18:52.037338 1609109 command_runner.go:130] >       "spec": null,
	I1009 23:18:52.037343 1609109 command_runner.go:130] >       "pinned": false
	I1009 23:18:52.037351 1609109 command_runner.go:130] >     }
	I1009 23:18:52.037355 1609109 command_runner.go:130] >   ]
	I1009 23:18:52.037360 1609109 command_runner.go:130] > }
	I1009 23:18:52.038586 1609109 crio.go:496] all images are preloaded for cri-o runtime.
	I1009 23:18:52.038626 1609109 crio.go:415] Images already preloaded, skipping extraction
	I1009 23:18:52.038681 1609109 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 23:18:52.080588 1609109 command_runner.go:130] > {
	I1009 23:18:52.080607 1609109 command_runner.go:130] >   "images": [
	I1009 23:18:52.080613 1609109 command_runner.go:130] >     {
	I1009 23:18:52.080623 1609109 command_runner.go:130] >       "id": "04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26",
	I1009 23:18:52.080629 1609109 command_runner.go:130] >       "repoTags": [
	I1009 23:18:52.080640 1609109 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1009 23:18:52.080645 1609109 command_runner.go:130] >       ],
	I1009 23:18:52.080650 1609109 command_runner.go:130] >       "repoDigests": [
	I1009 23:18:52.080669 1609109 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1009 23:18:52.080678 1609109 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2"
	I1009 23:18:52.080683 1609109 command_runner.go:130] >       ],
	I1009 23:18:52.080691 1609109 command_runner.go:130] >       "size": "60867618",
	I1009 23:18:52.080696 1609109 command_runner.go:130] >       "uid": null,
	I1009 23:18:52.080701 1609109 command_runner.go:130] >       "username": "",
	I1009 23:18:52.080707 1609109 command_runner.go:130] >       "spec": null,
	I1009 23:18:52.080712 1609109 command_runner.go:130] >       "pinned": false
	I1009 23:18:52.080716 1609109 command_runner.go:130] >     },
	I1009 23:18:52.080720 1609109 command_runner.go:130] >     {
	I1009 23:18:52.080728 1609109 command_runner.go:130] >       "id": "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1009 23:18:52.080733 1609109 command_runner.go:130] >       "repoTags": [
	I1009 23:18:52.080739 1609109 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1009 23:18:52.080743 1609109 command_runner.go:130] >       ],
	I1009 23:18:52.080751 1609109 command_runner.go:130] >       "repoDigests": [
	I1009 23:18:52.080760 1609109 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1009 23:18:52.080770 1609109 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1009 23:18:52.080774 1609109 command_runner.go:130] >       ],
	I1009 23:18:52.080783 1609109 command_runner.go:130] >       "size": "29037500",
	I1009 23:18:52.080788 1609109 command_runner.go:130] >       "uid": null,
	I1009 23:18:52.080793 1609109 command_runner.go:130] >       "username": "",
	I1009 23:18:52.080800 1609109 command_runner.go:130] >       "spec": null,
	I1009 23:18:52.080805 1609109 command_runner.go:130] >       "pinned": false
	I1009 23:18:52.080809 1609109 command_runner.go:130] >     },
	I1009 23:18:52.080813 1609109 command_runner.go:130] >     {
	I1009 23:18:52.080821 1609109 command_runner.go:130] >       "id": "97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108",
	I1009 23:18:52.080826 1609109 command_runner.go:130] >       "repoTags": [
	I1009 23:18:52.080832 1609109 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1009 23:18:52.080836 1609109 command_runner.go:130] >       ],
	I1009 23:18:52.080841 1609109 command_runner.go:130] >       "repoDigests": [
	I1009 23:18:52.080850 1609109 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105",
	I1009 23:18:52.080860 1609109 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"
	I1009 23:18:52.080866 1609109 command_runner.go:130] >       ],
	I1009 23:18:52.080872 1609109 command_runner.go:130] >       "size": "51393451",
	I1009 23:18:52.080877 1609109 command_runner.go:130] >       "uid": null,
	I1009 23:18:52.080882 1609109 command_runner.go:130] >       "username": "",
	I1009 23:18:52.080887 1609109 command_runner.go:130] >       "spec": null,
	I1009 23:18:52.080893 1609109 command_runner.go:130] >       "pinned": false
	I1009 23:18:52.080897 1609109 command_runner.go:130] >     },
	I1009 23:18:52.080904 1609109 command_runner.go:130] >     {
	I1009 23:18:52.080912 1609109 command_runner.go:130] >       "id": "9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace",
	I1009 23:18:52.080917 1609109 command_runner.go:130] >       "repoTags": [
	I1009 23:18:52.080923 1609109 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1009 23:18:52.080927 1609109 command_runner.go:130] >       ],
	I1009 23:18:52.080932 1609109 command_runner.go:130] >       "repoDigests": [
	I1009 23:18:52.080941 1609109 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3",
	I1009 23:18:52.080950 1609109 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b"
	I1009 23:18:52.080966 1609109 command_runner.go:130] >       ],
	I1009 23:18:52.080972 1609109 command_runner.go:130] >       "size": "182203183",
	I1009 23:18:52.080976 1609109 command_runner.go:130] >       "uid": {
	I1009 23:18:52.080981 1609109 command_runner.go:130] >         "value": "0"
	I1009 23:18:52.080986 1609109 command_runner.go:130] >       },
	I1009 23:18:52.080991 1609109 command_runner.go:130] >       "username": "",
	I1009 23:18:52.080995 1609109 command_runner.go:130] >       "spec": null,
	I1009 23:18:52.081000 1609109 command_runner.go:130] >       "pinned": false
	I1009 23:18:52.081004 1609109 command_runner.go:130] >     },
	I1009 23:18:52.081009 1609109 command_runner.go:130] >     {
	I1009 23:18:52.081018 1609109 command_runner.go:130] >       "id": "30bb499447fe1bc655853e2e8ac386cdeb28c80263536259cb54f290f9a58d6c",
	I1009 23:18:52.081023 1609109 command_runner.go:130] >       "repoTags": [
	I1009 23:18:52.081029 1609109 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.2"
	I1009 23:18:52.081036 1609109 command_runner.go:130] >       ],
	I1009 23:18:52.081040 1609109 command_runner.go:130] >       "repoDigests": [
	I1009 23:18:52.081052 1609109 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:07742a71be5e2ac5dc434618fa720ba38bebb463e3bdc0c58b600b4f7716bc3d",
	I1009 23:18:52.081061 1609109 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:6beea2e5531a0606613594fd3ed92d71bbdcef99dd3237522049a0b32cad736c"
	I1009 23:18:52.081066 1609109 command_runner.go:130] >       ],
	I1009 23:18:52.081071 1609109 command_runner.go:130] >       "size": "121054158",
	I1009 23:18:52.081075 1609109 command_runner.go:130] >       "uid": {
	I1009 23:18:52.081080 1609109 command_runner.go:130] >         "value": "0"
	I1009 23:18:52.081084 1609109 command_runner.go:130] >       },
	I1009 23:18:52.081089 1609109 command_runner.go:130] >       "username": "",
	I1009 23:18:52.081094 1609109 command_runner.go:130] >       "spec": null,
	I1009 23:18:52.081099 1609109 command_runner.go:130] >       "pinned": false
	I1009 23:18:52.081103 1609109 command_runner.go:130] >     },
	I1009 23:18:52.081107 1609109 command_runner.go:130] >     {
	I1009 23:18:52.081114 1609109 command_runner.go:130] >       "id": "89d57b83c17862d0ca2dd214e9e5ad425f8d67ecba32d10b846f8d22d3b5597c",
	I1009 23:18:52.081122 1609109 command_runner.go:130] >       "repoTags": [
	I1009 23:18:52.081128 1609109 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.2"
	I1009 23:18:52.081133 1609109 command_runner.go:130] >       ],
	I1009 23:18:52.081138 1609109 command_runner.go:130] >       "repoDigests": [
	I1009 23:18:52.081147 1609109 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:3c85f8a91743f4c306163137b121c64816c5c15bf2f002c9238ce0a4ac22b5c8",
	I1009 23:18:52.081156 1609109 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4"
	I1009 23:18:52.081161 1609109 command_runner.go:130] >       ],
	I1009 23:18:52.081168 1609109 command_runner.go:130] >       "size": "117187380",
	I1009 23:18:52.081173 1609109 command_runner.go:130] >       "uid": {
	I1009 23:18:52.081178 1609109 command_runner.go:130] >         "value": "0"
	I1009 23:18:52.081182 1609109 command_runner.go:130] >       },
	I1009 23:18:52.081186 1609109 command_runner.go:130] >       "username": "",
	I1009 23:18:52.081191 1609109 command_runner.go:130] >       "spec": null,
	I1009 23:18:52.081196 1609109 command_runner.go:130] >       "pinned": false
	I1009 23:18:52.081200 1609109 command_runner.go:130] >     },
	I1009 23:18:52.081204 1609109 command_runner.go:130] >     {
	I1009 23:18:52.081211 1609109 command_runner.go:130] >       "id": "7da62c127fc0f2c3473babe4dd0fe1da874278c4e524a490b1781e3e0e6dddfa",
	I1009 23:18:52.081216 1609109 command_runner.go:130] >       "repoTags": [
	I1009 23:18:52.081224 1609109 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.2"
	I1009 23:18:52.081229 1609109 command_runner.go:130] >       ],
	I1009 23:18:52.081234 1609109 command_runner.go:130] >       "repoDigests": [
	I1009 23:18:52.081242 1609109 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:41c8f92d1cd571e0e36af431f35c78379f84f5daf5b85d43014a9940d697afcf",
	I1009 23:18:52.081251 1609109 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:714d43ef0334cfb0e15ffd89f0b385681374b72a4865be28ff891b6297c015b8"
	I1009 23:18:52.081256 1609109 command_runner.go:130] >       ],
	I1009 23:18:52.081261 1609109 command_runner.go:130] >       "size": "69926807",
	I1009 23:18:52.081265 1609109 command_runner.go:130] >       "uid": null,
	I1009 23:18:52.081273 1609109 command_runner.go:130] >       "username": "",
	I1009 23:18:52.081278 1609109 command_runner.go:130] >       "spec": null,
	I1009 23:18:52.081282 1609109 command_runner.go:130] >       "pinned": false
	I1009 23:18:52.081287 1609109 command_runner.go:130] >     },
	I1009 23:18:52.081291 1609109 command_runner.go:130] >     {
	I1009 23:18:52.081300 1609109 command_runner.go:130] >       "id": "64fc40cee3716a4596d219b360ce536adb7998eaeae3f5dbb774d6503f5039d7",
	I1009 23:18:52.081305 1609109 command_runner.go:130] >       "repoTags": [
	I1009 23:18:52.081311 1609109 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.2"
	I1009 23:18:52.081315 1609109 command_runner.go:130] >       ],
	I1009 23:18:52.081319 1609109 command_runner.go:130] >       "repoDigests": [
	I1009 23:18:52.081342 1609109 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab",
	I1009 23:18:52.081352 1609109 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:f673cc4710d8ed6e3bd224b5641d2537d08e19177a291c2d9e189ea16f081c88"
	I1009 23:18:52.081356 1609109 command_runner.go:130] >       ],
	I1009 23:18:52.081361 1609109 command_runner.go:130] >       "size": "59188020",
	I1009 23:18:52.081366 1609109 command_runner.go:130] >       "uid": {
	I1009 23:18:52.081370 1609109 command_runner.go:130] >         "value": "0"
	I1009 23:18:52.081375 1609109 command_runner.go:130] >       },
	I1009 23:18:52.081380 1609109 command_runner.go:130] >       "username": "",
	I1009 23:18:52.081384 1609109 command_runner.go:130] >       "spec": null,
	I1009 23:18:52.081389 1609109 command_runner.go:130] >       "pinned": false
	I1009 23:18:52.081393 1609109 command_runner.go:130] >     },
	I1009 23:18:52.081397 1609109 command_runner.go:130] >     {
	I1009 23:18:52.081404 1609109 command_runner.go:130] >       "id": "829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e",
	I1009 23:18:52.081409 1609109 command_runner.go:130] >       "repoTags": [
	I1009 23:18:52.081414 1609109 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1009 23:18:52.081418 1609109 command_runner.go:130] >       ],
	I1009 23:18:52.081423 1609109 command_runner.go:130] >       "repoDigests": [
	I1009 23:18:52.081432 1609109 command_runner.go:130] >         "registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6",
	I1009 23:18:52.081443 1609109 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"
	I1009 23:18:52.081447 1609109 command_runner.go:130] >       ],
	I1009 23:18:52.081452 1609109 command_runner.go:130] >       "size": "520014",
	I1009 23:18:52.081457 1609109 command_runner.go:130] >       "uid": {
	I1009 23:18:52.081461 1609109 command_runner.go:130] >         "value": "65535"
	I1009 23:18:52.081465 1609109 command_runner.go:130] >       },
	I1009 23:18:52.081471 1609109 command_runner.go:130] >       "username": "",
	I1009 23:18:52.081476 1609109 command_runner.go:130] >       "spec": null,
	I1009 23:18:52.081481 1609109 command_runner.go:130] >       "pinned": false
	I1009 23:18:52.081485 1609109 command_runner.go:130] >     }
	I1009 23:18:52.081489 1609109 command_runner.go:130] >   ]
	I1009 23:18:52.081493 1609109 command_runner.go:130] > }
	I1009 23:18:52.081622 1609109 crio.go:496] all images are preloaded for cri-o runtime.
	I1009 23:18:52.081630 1609109 cache_images.go:84] Images are preloaded, skipping loading
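The preload decision logged here is driven by the JSON listing above. As a rough illustration only (this is not minikube's actual code), a minimal Go sketch that decodes the `crictl images --output json` shape shown above and flags any expected tag that is missing might look like the following; the expected tags are copied from the listing:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// imageList mirrors the JSON shape printed by `sudo crictl images --output json` above.
	type imageList struct {
		Images []struct {
			ID          string   `json:"id"`
			RepoTags    []string `json:"repoTags"`
			RepoDigests []string `json:"repoDigests"`
			Size        string   `json:"size"`
		} `json:"images"`
	}

	func main() {
		// Run crictl and decode its JSON output.
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			panic(err)
		}
		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			panic(err)
		}
		// Index every tag already present in the CRI-O image store.
		have := map[string]bool{}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				have[tag] = true
			}
		}
		// Expected tags taken from the listing above; report anything missing.
		for _, want := range []string{
			"registry.k8s.io/kube-apiserver:v1.28.2",
			"registry.k8s.io/kube-proxy:v1.28.2",
			"registry.k8s.io/pause:3.9",
		} {
			if !have[want] {
				fmt.Println("missing:", want)
			}
		}
	}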
	I1009 23:18:52.081701 1609109 ssh_runner.go:195] Run: crio config
	I1009 23:18:52.137707 1609109 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1009 23:18:52.137745 1609109 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1009 23:18:52.137754 1609109 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1009 23:18:52.137758 1609109 command_runner.go:130] > #
	I1009 23:18:52.137767 1609109 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1009 23:18:52.137775 1609109 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1009 23:18:52.137783 1609109 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1009 23:18:52.137794 1609109 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1009 23:18:52.137798 1609109 command_runner.go:130] > # reload'.
	I1009 23:18:52.137806 1609109 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1009 23:18:52.137814 1609109 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1009 23:18:52.137822 1609109 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1009 23:18:52.137829 1609109 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1009 23:18:52.137833 1609109 command_runner.go:130] > [crio]
	I1009 23:18:52.137841 1609109 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1009 23:18:52.137847 1609109 command_runner.go:130] > # container images, in this directory.
	I1009 23:18:52.137855 1609109 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1009 23:18:52.137863 1609109 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1009 23:18:52.137869 1609109 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I1009 23:18:52.137878 1609109 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1009 23:18:52.137886 1609109 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1009 23:18:52.137891 1609109 command_runner.go:130] > # storage_driver = "vfs"
	I1009 23:18:52.137898 1609109 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1009 23:18:52.137905 1609109 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1009 23:18:52.137909 1609109 command_runner.go:130] > # storage_option = [
	I1009 23:18:52.137913 1609109 command_runner.go:130] > # ]
	I1009 23:18:52.137921 1609109 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1009 23:18:52.137928 1609109 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1009 23:18:52.137934 1609109 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1009 23:18:52.137940 1609109 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1009 23:18:52.137947 1609109 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1009 23:18:52.137953 1609109 command_runner.go:130] > # always happen on a node reboot
	I1009 23:18:52.137958 1609109 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1009 23:18:52.137965 1609109 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1009 23:18:52.137972 1609109 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1009 23:18:52.137986 1609109 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1009 23:18:52.137993 1609109 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1009 23:18:52.138006 1609109 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1009 23:18:52.138016 1609109 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1009 23:18:52.138021 1609109 command_runner.go:130] > # internal_wipe = true
	I1009 23:18:52.138027 1609109 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1009 23:18:52.138035 1609109 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1009 23:18:52.138041 1609109 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1009 23:18:52.138048 1609109 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1009 23:18:52.138057 1609109 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1009 23:18:52.138061 1609109 command_runner.go:130] > [crio.api]
	I1009 23:18:52.138068 1609109 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1009 23:18:52.138073 1609109 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1009 23:18:52.138079 1609109 command_runner.go:130] > # IP address on which the stream server will listen.
	I1009 23:18:52.138084 1609109 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1009 23:18:52.138092 1609109 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1009 23:18:52.138098 1609109 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1009 23:18:52.138103 1609109 command_runner.go:130] > # stream_port = "0"
	I1009 23:18:52.138109 1609109 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1009 23:18:52.138446 1609109 command_runner.go:130] > # stream_enable_tls = false
	I1009 23:18:52.138492 1609109 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1009 23:18:52.138562 1609109 command_runner.go:130] > # stream_idle_timeout = ""
	I1009 23:18:52.138575 1609109 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1009 23:18:52.138583 1609109 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1009 23:18:52.138587 1609109 command_runner.go:130] > # minutes.
	I1009 23:18:52.138593 1609109 command_runner.go:130] > # stream_tls_cert = ""
	I1009 23:18:52.138600 1609109 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1009 23:18:52.138607 1609109 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1009 23:18:52.138612 1609109 command_runner.go:130] > # stream_tls_key = ""
	I1009 23:18:52.138619 1609109 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1009 23:18:52.138627 1609109 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1009 23:18:52.138633 1609109 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1009 23:18:52.138638 1609109 command_runner.go:130] > # stream_tls_ca = ""
	I1009 23:18:52.138647 1609109 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1009 23:18:52.138653 1609109 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1009 23:18:52.138662 1609109 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1009 23:18:52.138667 1609109 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
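Both grpc_max_send_msg_size and grpc_max_recv_msg_size are raised here to 83886080 bytes (80 MiB) from the 16 MiB default mentioned in the comments, and a CRI client needs matching call options to exchange large messages. A minimal sketch, assuming the standard google.golang.org/grpc and k8s.io/cri-api packages and the default socket path shown above:

	package main

	import (
		"context"
		"fmt"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Dial the CRI-O socket with send/receive limits matching the daemon config above.
		conn, err := grpc.Dial(
			"unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()),
			grpc.WithDefaultCallOptions(
				grpc.MaxCallRecvMsgSize(83886080),
				grpc.MaxCallSendMsgSize(83886080),
			),
		)
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		// Ask the runtime for its version as a connectivity check.
		client := runtimeapi.NewRuntimeServiceClient(conn)
		resp, err := client.Version(context.Background(), &runtimeapi.VersionRequest{})
		if err != nil {
			panic(err)
		}
		fmt.Println(resp.RuntimeName, resp.RuntimeVersion)
	}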
	I1009 23:18:52.138692 1609109 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1009 23:18:52.138699 1609109 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1009 23:18:52.138705 1609109 command_runner.go:130] > [crio.runtime]
	I1009 23:18:52.138712 1609109 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1009 23:18:52.138719 1609109 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1009 23:18:52.138727 1609109 command_runner.go:130] > # "nofile=1024:2048"
	I1009 23:18:52.138734 1609109 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1009 23:18:52.138739 1609109 command_runner.go:130] > # default_ulimits = [
	I1009 23:18:52.138743 1609109 command_runner.go:130] > # ]
	I1009 23:18:52.138751 1609109 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1009 23:18:52.138758 1609109 command_runner.go:130] > # no_pivot = false
	I1009 23:18:52.138766 1609109 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1009 23:18:52.138773 1609109 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1009 23:18:52.138779 1609109 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1009 23:18:52.138786 1609109 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1009 23:18:52.138792 1609109 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1009 23:18:52.138800 1609109 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1009 23:18:52.138805 1609109 command_runner.go:130] > # conmon = ""
	I1009 23:18:52.138810 1609109 command_runner.go:130] > # Cgroup setting for conmon
	I1009 23:18:52.138818 1609109 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1009 23:18:52.138825 1609109 command_runner.go:130] > conmon_cgroup = "pod"
	I1009 23:18:52.138832 1609109 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1009 23:18:52.138840 1609109 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1009 23:18:52.138848 1609109 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1009 23:18:52.138852 1609109 command_runner.go:130] > # conmon_env = [
	I1009 23:18:52.138856 1609109 command_runner.go:130] > # ]
	I1009 23:18:52.138863 1609109 command_runner.go:130] > # Additional environment variables to set for all the
	I1009 23:18:52.138869 1609109 command_runner.go:130] > # containers. These are overridden if set in the
	I1009 23:18:52.138876 1609109 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1009 23:18:52.138881 1609109 command_runner.go:130] > # default_env = [
	I1009 23:18:52.138885 1609109 command_runner.go:130] > # ]
	I1009 23:18:52.138891 1609109 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1009 23:18:52.138896 1609109 command_runner.go:130] > # selinux = false
	I1009 23:18:52.138903 1609109 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1009 23:18:52.138911 1609109 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1009 23:18:52.138917 1609109 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1009 23:18:52.138922 1609109 command_runner.go:130] > # seccomp_profile = ""
	I1009 23:18:52.138929 1609109 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1009 23:18:52.138937 1609109 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1009 23:18:52.138945 1609109 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1009 23:18:52.138950 1609109 command_runner.go:130] > # which might increase security.
	I1009 23:18:52.138956 1609109 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I1009 23:18:52.138963 1609109 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1009 23:18:52.138971 1609109 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1009 23:18:52.138978 1609109 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1009 23:18:52.138987 1609109 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1009 23:18:52.138993 1609109 command_runner.go:130] > # This option supports live configuration reload.
	I1009 23:18:52.138998 1609109 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1009 23:18:52.139021 1609109 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1009 23:18:52.139027 1609109 command_runner.go:130] > # the cgroup blockio controller.
	I1009 23:18:52.139032 1609109 command_runner.go:130] > # blockio_config_file = ""
	I1009 23:18:52.139039 1609109 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1009 23:18:52.139044 1609109 command_runner.go:130] > # irqbalance daemon.
	I1009 23:18:52.139051 1609109 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1009 23:18:52.139058 1609109 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1009 23:18:52.139064 1609109 command_runner.go:130] > # This option supports live configuration reload.
	I1009 23:18:52.139071 1609109 command_runner.go:130] > # rdt_config_file = ""
	I1009 23:18:52.139077 1609109 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1009 23:18:52.139083 1609109 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1009 23:18:52.139090 1609109 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1009 23:18:52.139095 1609109 command_runner.go:130] > # separate_pull_cgroup = ""
	I1009 23:18:52.139157 1609109 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1009 23:18:52.139166 1609109 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1009 23:18:52.139171 1609109 command_runner.go:130] > # will be added.
	I1009 23:18:52.139176 1609109 command_runner.go:130] > # default_capabilities = [
	I1009 23:18:52.139180 1609109 command_runner.go:130] > # 	"CHOWN",
	I1009 23:18:52.139185 1609109 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1009 23:18:52.139189 1609109 command_runner.go:130] > # 	"FSETID",
	I1009 23:18:52.139194 1609109 command_runner.go:130] > # 	"FOWNER",
	I1009 23:18:52.139199 1609109 command_runner.go:130] > # 	"SETGID",
	I1009 23:18:52.139208 1609109 command_runner.go:130] > # 	"SETUID",
	I1009 23:18:52.139213 1609109 command_runner.go:130] > # 	"SETPCAP",
	I1009 23:18:52.139217 1609109 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1009 23:18:52.139222 1609109 command_runner.go:130] > # 	"KILL",
	I1009 23:18:52.139229 1609109 command_runner.go:130] > # ]
	I1009 23:18:52.139238 1609109 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1009 23:18:52.139246 1609109 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1009 23:18:52.139251 1609109 command_runner.go:130] > # add_inheritable_capabilities = true
	I1009 23:18:52.139258 1609109 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1009 23:18:52.139265 1609109 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1009 23:18:52.139270 1609109 command_runner.go:130] > # default_sysctls = [
	I1009 23:18:52.139274 1609109 command_runner.go:130] > # ]
	I1009 23:18:52.139279 1609109 command_runner.go:130] > # List of devices on the host that a
	I1009 23:18:52.139287 1609109 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1009 23:18:52.139291 1609109 command_runner.go:130] > # allowed_devices = [
	I1009 23:18:52.139296 1609109 command_runner.go:130] > # 	"/dev/fuse",
	I1009 23:18:52.139300 1609109 command_runner.go:130] > # ]
	I1009 23:18:52.139305 1609109 command_runner.go:130] > # List of additional devices, specified as
	I1009 23:18:52.139330 1609109 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1009 23:18:52.139337 1609109 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1009 23:18:52.139344 1609109 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1009 23:18:52.139350 1609109 command_runner.go:130] > # additional_devices = [
	I1009 23:18:52.139356 1609109 command_runner.go:130] > # ]
	I1009 23:18:52.139362 1609109 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1009 23:18:52.139367 1609109 command_runner.go:130] > # cdi_spec_dirs = [
	I1009 23:18:52.139371 1609109 command_runner.go:130] > # 	"/etc/cdi",
	I1009 23:18:52.139376 1609109 command_runner.go:130] > # 	"/var/run/cdi",
	I1009 23:18:52.139380 1609109 command_runner.go:130] > # ]
	I1009 23:18:52.139387 1609109 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1009 23:18:52.139395 1609109 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1009 23:18:52.139399 1609109 command_runner.go:130] > # Defaults to false.
	I1009 23:18:52.139405 1609109 command_runner.go:130] > # device_ownership_from_security_context = false
	I1009 23:18:52.139413 1609109 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1009 23:18:52.139421 1609109 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1009 23:18:52.139426 1609109 command_runner.go:130] > # hooks_dir = [
	I1009 23:18:52.139431 1609109 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1009 23:18:52.139435 1609109 command_runner.go:130] > # ]
	I1009 23:18:52.139443 1609109 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1009 23:18:52.139450 1609109 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1009 23:18:52.139456 1609109 command_runner.go:130] > # its default mounts from the following two files:
	I1009 23:18:52.139461 1609109 command_runner.go:130] > #
	I1009 23:18:52.139468 1609109 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1009 23:18:52.139476 1609109 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1009 23:18:52.139482 1609109 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1009 23:18:52.139486 1609109 command_runner.go:130] > #
	I1009 23:18:52.139493 1609109 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1009 23:18:52.139501 1609109 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1009 23:18:52.139512 1609109 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1009 23:18:52.139519 1609109 command_runner.go:130] > #      only add mounts it finds in this file.
	I1009 23:18:52.139522 1609109 command_runner.go:130] > #
	I1009 23:18:52.139527 1609109 command_runner.go:130] > # default_mounts_file = ""
	I1009 23:18:52.139533 1609109 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1009 23:18:52.139545 1609109 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1009 23:18:52.139552 1609109 command_runner.go:130] > # pids_limit = 0
	I1009 23:18:52.139559 1609109 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1009 23:18:52.139567 1609109 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1009 23:18:52.139574 1609109 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1009 23:18:52.139584 1609109 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1009 23:18:52.139590 1609109 command_runner.go:130] > # log_size_max = -1
	I1009 23:18:52.139599 1609109 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1009 23:18:52.139605 1609109 command_runner.go:130] > # log_to_journald = false
	I1009 23:18:52.139612 1609109 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1009 23:18:52.139620 1609109 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1009 23:18:52.139626 1609109 command_runner.go:130] > # Path to directory for container attach sockets.
	I1009 23:18:52.139632 1609109 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1009 23:18:52.139638 1609109 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1009 23:18:52.139643 1609109 command_runner.go:130] > # bind_mount_prefix = ""
	I1009 23:18:52.139649 1609109 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1009 23:18:52.139654 1609109 command_runner.go:130] > # read_only = false
	I1009 23:18:52.139661 1609109 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1009 23:18:52.139669 1609109 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1009 23:18:52.139674 1609109 command_runner.go:130] > # live configuration reload.
	I1009 23:18:52.139679 1609109 command_runner.go:130] > # log_level = "info"
	I1009 23:18:52.139686 1609109 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1009 23:18:52.139692 1609109 command_runner.go:130] > # This option supports live configuration reload.
	I1009 23:18:52.140124 1609109 command_runner.go:130] > # log_filter = ""
	I1009 23:18:52.140142 1609109 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1009 23:18:52.140150 1609109 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1009 23:18:52.140155 1609109 command_runner.go:130] > # separated by comma.
	I1009 23:18:52.140160 1609109 command_runner.go:130] > # uid_mappings = ""
	I1009 23:18:52.140167 1609109 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1009 23:18:52.140174 1609109 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1009 23:18:52.140179 1609109 command_runner.go:130] > # separated by comma.
	I1009 23:18:52.140184 1609109 command_runner.go:130] > # gid_mappings = ""
	I1009 23:18:52.140191 1609109 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1009 23:18:52.140198 1609109 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1009 23:18:52.140206 1609109 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1009 23:18:52.140211 1609109 command_runner.go:130] > # minimum_mappable_uid = -1
	I1009 23:18:52.140219 1609109 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1009 23:18:52.140226 1609109 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1009 23:18:52.140233 1609109 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1009 23:18:52.140239 1609109 command_runner.go:130] > # minimum_mappable_gid = -1
	I1009 23:18:52.140246 1609109 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1009 23:18:52.140253 1609109 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1009 23:18:52.140263 1609109 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1009 23:18:52.140268 1609109 command_runner.go:130] > # ctr_stop_timeout = 30
	I1009 23:18:52.140275 1609109 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1009 23:18:52.140283 1609109 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1009 23:18:52.140290 1609109 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1009 23:18:52.140296 1609109 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1009 23:18:52.140301 1609109 command_runner.go:130] > # drop_infra_ctr = true
	I1009 23:18:52.140308 1609109 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1009 23:18:52.140315 1609109 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1009 23:18:52.140323 1609109 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1009 23:18:52.140328 1609109 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1009 23:18:52.140336 1609109 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1009 23:18:52.140342 1609109 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1009 23:18:52.140347 1609109 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1009 23:18:52.140355 1609109 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1009 23:18:52.140360 1609109 command_runner.go:130] > # pinns_path = ""
	I1009 23:18:52.140367 1609109 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1009 23:18:52.140375 1609109 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1009 23:18:52.140385 1609109 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1009 23:18:52.140390 1609109 command_runner.go:130] > # default_runtime = "runc"
	I1009 23:18:52.140396 1609109 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1009 23:18:52.140405 1609109 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior of creating them as directories).
	I1009 23:18:52.140416 1609109 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1009 23:18:52.140422 1609109 command_runner.go:130] > # creation as a file is not desired either.
	I1009 23:18:52.140435 1609109 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1009 23:18:52.140441 1609109 command_runner.go:130] > # the hostname is being managed dynamically.
	I1009 23:18:52.140446 1609109 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1009 23:18:52.140450 1609109 command_runner.go:130] > # ]
	I1009 23:18:52.140458 1609109 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1009 23:18:52.140465 1609109 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1009 23:18:52.140473 1609109 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1009 23:18:52.140481 1609109 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1009 23:18:52.140484 1609109 command_runner.go:130] > #
	I1009 23:18:52.140490 1609109 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1009 23:18:52.140496 1609109 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1009 23:18:52.140501 1609109 command_runner.go:130] > #  runtime_type = "oci"
	I1009 23:18:52.140508 1609109 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1009 23:18:52.140514 1609109 command_runner.go:130] > #  privileged_without_host_devices = false
	I1009 23:18:52.140519 1609109 command_runner.go:130] > #  allowed_annotations = []
	I1009 23:18:52.140524 1609109 command_runner.go:130] > # Where:
	I1009 23:18:52.140530 1609109 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1009 23:18:52.140539 1609109 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1009 23:18:52.140547 1609109 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1009 23:18:52.140554 1609109 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1009 23:18:52.140559 1609109 command_runner.go:130] > #   in $PATH.
	I1009 23:18:52.140566 1609109 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1009 23:18:52.140572 1609109 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1009 23:18:52.140579 1609109 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1009 23:18:52.140584 1609109 command_runner.go:130] > #   state.
	I1009 23:18:52.140591 1609109 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1009 23:18:52.140598 1609109 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1009 23:18:52.140606 1609109 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1009 23:18:52.140631 1609109 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1009 23:18:52.140640 1609109 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1009 23:18:52.140653 1609109 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1009 23:18:52.140659 1609109 command_runner.go:130] > #   The currently recognized values are:
	I1009 23:18:52.140667 1609109 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1009 23:18:52.140675 1609109 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1009 23:18:52.140682 1609109 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1009 23:18:52.140690 1609109 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1009 23:18:52.140699 1609109 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1009 23:18:52.140706 1609109 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1009 23:18:52.140714 1609109 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1009 23:18:52.140722 1609109 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1009 23:18:52.140728 1609109 command_runner.go:130] > #   should be moved to the container's cgroup
	I1009 23:18:52.140733 1609109 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1009 23:18:52.140739 1609109 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I1009 23:18:52.140745 1609109 command_runner.go:130] > runtime_type = "oci"
	I1009 23:18:52.140750 1609109 command_runner.go:130] > runtime_root = "/run/runc"
	I1009 23:18:52.140755 1609109 command_runner.go:130] > runtime_config_path = ""
	I1009 23:18:52.140759 1609109 command_runner.go:130] > monitor_path = ""
	I1009 23:18:52.140764 1609109 command_runner.go:130] > monitor_cgroup = ""
	I1009 23:18:52.140771 1609109 command_runner.go:130] > monitor_exec_cgroup = ""
	I1009 23:18:52.140807 1609109 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1009 23:18:52.140812 1609109 command_runner.go:130] > # running containers
	I1009 23:18:52.140818 1609109 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1009 23:18:52.140825 1609109 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1009 23:18:52.140835 1609109 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1009 23:18:52.140842 1609109 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1009 23:18:52.140848 1609109 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1009 23:18:52.140854 1609109 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1009 23:18:52.140859 1609109 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1009 23:18:52.140865 1609109 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1009 23:18:52.140873 1609109 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1009 23:18:52.140878 1609109 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1009 23:18:52.140886 1609109 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1009 23:18:52.140892 1609109 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1009 23:18:52.140900 1609109 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1009 23:18:52.140909 1609109 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1009 23:18:52.140918 1609109 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1009 23:18:52.140928 1609109 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1009 23:18:52.140940 1609109 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1009 23:18:52.140949 1609109 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1009 23:18:52.140956 1609109 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1009 23:18:52.140965 1609109 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1009 23:18:52.140972 1609109 command_runner.go:130] > # Example:
	I1009 23:18:52.140978 1609109 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1009 23:18:52.140984 1609109 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1009 23:18:52.140990 1609109 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1009 23:18:52.141004 1609109 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1009 23:18:52.141009 1609109 command_runner.go:130] > # cpuset = 0
	I1009 23:18:52.141013 1609109 command_runner.go:130] > # cpushares = "0-1"
	I1009 23:18:52.141018 1609109 command_runner.go:130] > # Where:
	I1009 23:18:52.141024 1609109 command_runner.go:130] > # The workload name is workload-type.
	I1009 23:18:52.141034 1609109 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1009 23:18:52.141041 1609109 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1009 23:18:52.141047 1609109 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1009 23:18:52.141057 1609109 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1009 23:18:52.141066 1609109 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1009 23:18:52.141070 1609109 command_runner.go:130] > # 
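To make the workload annotation contract concrete: following the example above, a pod opting into the workload-type workload and overriding cpushares for one container would carry annotations like the following. This is a hypothetical fragment; the container name "app" and the value "512" are invented for illustration:

	package main

	import "fmt"

	func main() {
		// Hypothetical pod annotations matching the example workload config above:
		// the activation key is matched exactly (its value is ignored), and the
		// per-container override follows the documented example's key form.
		annotations := map[string]string{
			"io.crio/workload":              "",                     // opt the pod into workload-type
			"io.crio.workload-type/app":     `{"cpushares": "512"}`, // override cpushares for container "app"
		}
		for k, v := range annotations {
			fmt.Printf("%s = %q\n", k, v)
		}
	}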
	I1009 23:18:52.141078 1609109 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1009 23:18:52.141082 1609109 command_runner.go:130] > #
	I1009 23:18:52.141090 1609109 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1009 23:18:52.141098 1609109 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1009 23:18:52.141106 1609109 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1009 23:18:52.141114 1609109 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1009 23:18:52.141121 1609109 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1009 23:18:52.141125 1609109 command_runner.go:130] > [crio.image]
	I1009 23:18:52.141132 1609109 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1009 23:18:52.141138 1609109 command_runner.go:130] > # default_transport = "docker://"
	I1009 23:18:52.141145 1609109 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1009 23:18:52.141175 1609109 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1009 23:18:52.141180 1609109 command_runner.go:130] > # global_auth_file = ""
	I1009 23:18:52.141186 1609109 command_runner.go:130] > # The image used to instantiate infra containers.
	I1009 23:18:52.141192 1609109 command_runner.go:130] > # This option supports live configuration reload.
	I1009 23:18:52.141199 1609109 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1009 23:18:52.141209 1609109 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1009 23:18:52.141216 1609109 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1009 23:18:52.141222 1609109 command_runner.go:130] > # This option supports live configuration reload.
	I1009 23:18:52.141227 1609109 command_runner.go:130] > # pause_image_auth_file = ""
	I1009 23:18:52.141234 1609109 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1009 23:18:52.141242 1609109 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1009 23:18:52.141249 1609109 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1009 23:18:52.141256 1609109 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1009 23:18:52.141261 1609109 command_runner.go:130] > # pause_command = "/pause"
	I1009 23:18:52.141268 1609109 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1009 23:18:52.141276 1609109 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1009 23:18:52.141284 1609109 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1009 23:18:52.141291 1609109 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1009 23:18:52.141297 1609109 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1009 23:18:52.141302 1609109 command_runner.go:130] > # signature_policy = ""
	I1009 23:18:52.141311 1609109 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1009 23:18:52.141318 1609109 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1009 23:18:52.141323 1609109 command_runner.go:130] > # changing them here.
	I1009 23:18:52.141330 1609109 command_runner.go:130] > # insecure_registries = [
	I1009 23:18:52.141334 1609109 command_runner.go:130] > # ]
	I1009 23:18:52.141342 1609109 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1009 23:18:52.141348 1609109 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1009 23:18:52.141354 1609109 command_runner.go:130] > # image_volumes = "mkdir"
	I1009 23:18:52.141360 1609109 command_runner.go:130] > # Temporary directory to use for storing big files
	I1009 23:18:52.141365 1609109 command_runner.go:130] > # big_files_temporary_dir = ""
	I1009 23:18:52.141372 1609109 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1009 23:18:52.141378 1609109 command_runner.go:130] > # CNI plugins.
	I1009 23:18:52.141383 1609109 command_runner.go:130] > [crio.network]
	I1009 23:18:52.141390 1609109 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1009 23:18:52.141396 1609109 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1009 23:18:52.141401 1609109 command_runner.go:130] > # cni_default_network = ""
	I1009 23:18:52.141408 1609109 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1009 23:18:52.141413 1609109 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1009 23:18:52.141420 1609109 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1009 23:18:52.141425 1609109 command_runner.go:130] > # plugin_dirs = [
	I1009 23:18:52.141430 1609109 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1009 23:18:52.141435 1609109 command_runner.go:130] > # ]
	I1009 23:18:52.141442 1609109 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I1009 23:18:52.141447 1609109 command_runner.go:130] > [crio.metrics]
	I1009 23:18:52.141453 1609109 command_runner.go:130] > # Globally enable or disable metrics support.
	I1009 23:18:52.141458 1609109 command_runner.go:130] > # enable_metrics = false
	I1009 23:18:52.141465 1609109 command_runner.go:130] > # Specify enabled metrics collectors.
	I1009 23:18:52.141471 1609109 command_runner.go:130] > # By default, all metrics are enabled.
	I1009 23:18:52.141478 1609109 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1009 23:18:52.141486 1609109 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1009 23:18:52.141493 1609109 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1009 23:18:52.141497 1609109 command_runner.go:130] > # metrics_collectors = [
	I1009 23:18:52.141502 1609109 command_runner.go:130] > # 	"operations",
	I1009 23:18:52.141507 1609109 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1009 23:18:52.141513 1609109 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1009 23:18:52.141518 1609109 command_runner.go:130] > # 	"operations_errors",
	I1009 23:18:52.141523 1609109 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1009 23:18:52.141527 1609109 command_runner.go:130] > # 	"image_pulls_by_name",
	I1009 23:18:52.141533 1609109 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1009 23:18:52.141548 1609109 command_runner.go:130] > # 	"image_pulls_failures",
	I1009 23:18:52.141554 1609109 command_runner.go:130] > # 	"image_pulls_successes",
	I1009 23:18:52.141559 1609109 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1009 23:18:52.141564 1609109 command_runner.go:130] > # 	"image_layer_reuse",
	I1009 23:18:52.141569 1609109 command_runner.go:130] > # 	"containers_oom_total",
	I1009 23:18:52.141574 1609109 command_runner.go:130] > # 	"containers_oom",
	I1009 23:18:52.141578 1609109 command_runner.go:130] > # 	"processes_defunct",
	I1009 23:18:52.141583 1609109 command_runner.go:130] > # 	"operations_total",
	I1009 23:18:52.141588 1609109 command_runner.go:130] > # 	"operations_latency_seconds",
	I1009 23:18:52.141594 1609109 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1009 23:18:52.141599 1609109 command_runner.go:130] > # 	"operations_errors_total",
	I1009 23:18:52.141604 1609109 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1009 23:18:52.141609 1609109 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1009 23:18:52.141616 1609109 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1009 23:18:52.141621 1609109 command_runner.go:130] > # 	"image_pulls_success_total",
	I1009 23:18:52.141626 1609109 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1009 23:18:52.141632 1609109 command_runner.go:130] > # 	"containers_oom_count_total",
	I1009 23:18:52.141636 1609109 command_runner.go:130] > # ]
	I1009 23:18:52.141643 1609109 command_runner.go:130] > # The port on which the metrics server will listen.
	I1009 23:18:52.141648 1609109 command_runner.go:130] > # metrics_port = 9090
	I1009 23:18:52.141654 1609109 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1009 23:18:52.141659 1609109 command_runner.go:130] > # metrics_socket = ""
	I1009 23:18:52.141665 1609109 command_runner.go:130] > # The certificate for the secure metrics server.
	I1009 23:18:52.141672 1609109 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1009 23:18:52.141680 1609109 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1009 23:18:52.141686 1609109 command_runner.go:130] > # certificate on any modification event.
	I1009 23:18:52.141690 1609109 command_runner.go:130] > # metrics_cert = ""
	I1009 23:18:52.141696 1609109 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1009 23:18:52.141703 1609109 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1009 23:18:52.141708 1609109 command_runner.go:130] > # metrics_key = ""
	I1009 23:18:52.141715 1609109 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1009 23:18:52.141719 1609109 command_runner.go:130] > [crio.tracing]
	I1009 23:18:52.141726 1609109 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1009 23:18:52.141731 1609109 command_runner.go:130] > # enable_tracing = false
	I1009 23:18:52.141737 1609109 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1009 23:18:52.141742 1609109 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1009 23:18:52.141772 1609109 command_runner.go:130] > # Number of samples to collect per million spans.
	I1009 23:18:52.141778 1609109 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1009 23:18:52.141786 1609109 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1009 23:18:52.141790 1609109 command_runner.go:130] > [crio.stats]
	I1009 23:18:52.141797 1609109 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1009 23:18:52.141803 1609109 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1009 23:18:52.141810 1609109 command_runner.go:130] > # stats_collection_period = 0
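	The [crio.metrics] block above is fully commented out, so metrics stay at the default of disabled (enable_metrics = false). If they were enabled, CRI-O would serve a Prometheus endpoint on metrics_port; a minimal sketch of scraping it in Go, assuming the default port 9090 on localhost:

	// metrics_probe.go - sketch: fetch CRI-O's Prometheus metrics, assuming
	// enable_metrics = true and the default metrics_port = 9090 from the config above.
	package main

	import (
		"bufio"
		"fmt"
		"log"
		"net/http"
		"strings"
	)

	func main() {
		resp, err := http.Get("http://127.0.0.1:9090/metrics")
		if err != nil {
			log.Fatalf("scrape failed: %v", err)
		}
		defer resp.Body.Close()

		// Print only the crio_* series, e.g. crio_operations_total.
		sc := bufio.NewScanner(resp.Body)
		for sc.Scan() {
			if strings.HasPrefix(sc.Text(), "crio_") {
				fmt.Println(sc.Text())
			}
		}
	}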
	I1009 23:18:52.143696 1609109 command_runner.go:130] ! time="2023-10-09 23:18:52.134399728Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I1009 23:18:52.143722 1609109 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1009 23:18:52.143828 1609109 cni.go:84] Creating CNI manager for ""
	I1009 23:18:52.143843 1609109 cni.go:136] 1 nodes found, recommending kindnet
	I1009 23:18:52.143873 1609109 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1009 23:18:52.143894 1609109 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-717678 NodeName:multinode-717678 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 23:18:52.144038 1609109 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-717678"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 23:18:52.144115 1609109 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-717678 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:multinode-717678 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1009 23:18:52.144178 1609109 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1009 23:18:52.153706 1609109 command_runner.go:130] > kubeadm
	I1009 23:18:52.153886 1609109 command_runner.go:130] > kubectl
	I1009 23:18:52.153898 1609109 command_runner.go:130] > kubelet
	I1009 23:18:52.155073 1609109 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 23:18:52.155161 1609109 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 23:18:52.165488 1609109 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (426 bytes)
	I1009 23:18:52.186843 1609109 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 23:18:52.207800 1609109 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
	I1009 23:18:52.229206 1609109 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1009 23:18:52.233890 1609109 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
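	The bash one-liner above makes the /etc/hosts entry idempotent: it filters out any stale control-plane.minikube.internal line, then appends the current one. A rough Go sketch of the same filter-and-append step (IP and hostname taken from the log; error handling simplified, needs root to write /etc/hosts):

	// hosts_update.go - sketch of the idempotent /etc/hosts rewrite shown above.
	package main

	import (
		"log"
		"os"
		"strings"
	)

	func main() {
		const entry = "192.168.58.2\tcontrol-plane.minikube.internal"

		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			log.Fatal(err)
		}

		// Drop any existing line for the control-plane alias, then re-append it.
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
				kept = append(kept, line)
			}
		}
		kept = append(kept, entry)

		if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			log.Fatal(err)
		}
	}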
	I1009 23:18:52.247103 1609109 certs.go:56] Setting up /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/multinode-717678 for IP: 192.168.58.2
	I1009 23:18:52.247165 1609109 certs.go:190] acquiring lock for shared ca certs: {Name:mk430c21a56d31b4f15423923c65864a3e3a3c9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 23:18:52.247327 1609109 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17375-1537865/.minikube/ca.key
	I1009 23:18:52.247368 1609109 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17375-1537865/.minikube/proxy-client-ca.key
	I1009 23:18:52.247415 1609109 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/multinode-717678/client.key
	I1009 23:18:52.247426 1609109 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/multinode-717678/client.crt with IP's: []
	I1009 23:18:52.767732 1609109 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/multinode-717678/client.crt ...
	I1009 23:18:52.767763 1609109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/multinode-717678/client.crt: {Name:mk3d220ba4038141a129fdd29c8e0ee717f5354f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 23:18:52.767966 1609109 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/multinode-717678/client.key ...
	I1009 23:18:52.767978 1609109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/multinode-717678/client.key: {Name:mk40f6f96534c83916faa31c2446cfdedadc4649 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 23:18:52.768071 1609109 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/multinode-717678/apiserver.key.cee25041
	I1009 23:18:52.768093 1609109 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/multinode-717678/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1009 23:18:53.316015 1609109 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/multinode-717678/apiserver.crt.cee25041 ...
	I1009 23:18:53.316046 1609109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/multinode-717678/apiserver.crt.cee25041: {Name:mk38e32f8e2994e79ce0982399dce988753f9317 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 23:18:53.316232 1609109 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/multinode-717678/apiserver.key.cee25041 ...
	I1009 23:18:53.316244 1609109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/multinode-717678/apiserver.key.cee25041: {Name:mka0e7ff45a5d331570e7ccea327339c47c98169 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 23:18:53.316328 1609109 certs.go:337] copying /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/multinode-717678/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/multinode-717678/apiserver.crt
	I1009 23:18:53.316408 1609109 certs.go:341] copying /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/multinode-717678/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/multinode-717678/apiserver.key
	I1009 23:18:53.316465 1609109 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/multinode-717678/proxy-client.key
	I1009 23:18:53.316481 1609109 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/multinode-717678/proxy-client.crt with IP's: []
	I1009 23:18:54.126908 1609109 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/multinode-717678/proxy-client.crt ...
	I1009 23:18:54.126949 1609109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/multinode-717678/proxy-client.crt: {Name:mkae94a411ec65f73c013f73dc3571b020b80482 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 23:18:54.127160 1609109 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/multinode-717678/proxy-client.key ...
	I1009 23:18:54.127175 1609109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/multinode-717678/proxy-client.key: {Name:mkd11f8d5e8da71cc34322c60b067518e50e2648 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 23:18:54.127253 1609109 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/multinode-717678/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1009 23:18:54.127282 1609109 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/multinode-717678/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1009 23:18:54.127295 1609109 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/multinode-717678/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1009 23:18:54.127309 1609109 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/multinode-717678/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1009 23:18:54.127320 1609109 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-1537865/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 23:18:54.127336 1609109 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-1537865/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 23:18:54.127351 1609109 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-1537865/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 23:18:54.127367 1609109 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-1537865/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 23:18:54.127428 1609109 certs.go:437] found cert: /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/home/jenkins/minikube-integration/17375-1537865/.minikube/certs/1543215.pem (1338 bytes)
	W1009 23:18:54.127474 1609109 certs.go:433] ignoring /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/home/jenkins/minikube-integration/17375-1537865/.minikube/certs/1543215_empty.pem, impossibly tiny 0 bytes
	I1009 23:18:54.127491 1609109 certs.go:437] found cert: /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/home/jenkins/minikube-integration/17375-1537865/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 23:18:54.127519 1609109 certs.go:437] found cert: /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/home/jenkins/minikube-integration/17375-1537865/.minikube/certs/ca.pem (1078 bytes)
	I1009 23:18:54.127547 1609109 certs.go:437] found cert: /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/home/jenkins/minikube-integration/17375-1537865/.minikube/certs/cert.pem (1123 bytes)
	I1009 23:18:54.127580 1609109 certs.go:437] found cert: /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/home/jenkins/minikube-integration/17375-1537865/.minikube/certs/key.pem (1679 bytes)
	I1009 23:18:54.127646 1609109 certs.go:437] found cert: /home/jenkins/minikube-integration/17375-1537865/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17375-1537865/.minikube/files/etc/ssl/certs/15432152.pem (1708 bytes)
	I1009 23:18:54.127687 1609109 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-1537865/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 23:18:54.127706 1609109 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/1543215.pem -> /usr/share/ca-certificates/1543215.pem
	I1009 23:18:54.127721 1609109 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-1537865/.minikube/files/etc/ssl/certs/15432152.pem -> /usr/share/ca-certificates/15432152.pem
	I1009 23:18:54.128314 1609109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/multinode-717678/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1009 23:18:54.160602 1609109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/multinode-717678/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 23:18:54.189614 1609109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/multinode-717678/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 23:18:54.217810 1609109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/multinode-717678/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 23:18:54.246325 1609109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-1537865/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 23:18:54.275813 1609109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-1537865/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 23:18:54.304418 1609109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-1537865/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 23:18:54.332871 1609109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-1537865/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 23:18:54.361283 1609109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-1537865/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 23:18:54.389864 1609109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/1543215.pem --> /usr/share/ca-certificates/1543215.pem (1338 bytes)
	I1009 23:18:54.419229 1609109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-1537865/.minikube/files/etc/ssl/certs/15432152.pem --> /usr/share/ca-certificates/15432152.pem (1708 bytes)
	I1009 23:18:54.447774 1609109 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 23:18:54.468903 1609109 ssh_runner.go:195] Run: openssl version
	I1009 23:18:54.476073 1609109 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I1009 23:18:54.476175 1609109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 23:18:54.487947 1609109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 23:18:54.492539 1609109 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  9 22:55 /usr/share/ca-certificates/minikubeCA.pem
	I1009 23:18:54.492574 1609109 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  9 22:55 /usr/share/ca-certificates/minikubeCA.pem
	I1009 23:18:54.492637 1609109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 23:18:54.501066 1609109 command_runner.go:130] > b5213941
	I1009 23:18:54.501427 1609109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 23:18:54.513054 1609109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1543215.pem && ln -fs /usr/share/ca-certificates/1543215.pem /etc/ssl/certs/1543215.pem"
	I1009 23:18:54.527643 1609109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1543215.pem
	I1009 23:18:54.532477 1609109 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  9 23:03 /usr/share/ca-certificates/1543215.pem
	I1009 23:18:54.532551 1609109 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  9 23:03 /usr/share/ca-certificates/1543215.pem
	I1009 23:18:54.532632 1609109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1543215.pem
	I1009 23:18:54.541278 1609109 command_runner.go:130] > 51391683
	I1009 23:18:54.541722 1609109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1543215.pem /etc/ssl/certs/51391683.0"
	I1009 23:18:54.553446 1609109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15432152.pem && ln -fs /usr/share/ca-certificates/15432152.pem /etc/ssl/certs/15432152.pem"
	I1009 23:18:54.565856 1609109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15432152.pem
	I1009 23:18:54.570523 1609109 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  9 23:03 /usr/share/ca-certificates/15432152.pem
	I1009 23:18:54.570557 1609109 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  9 23:03 /usr/share/ca-certificates/15432152.pem
	I1009 23:18:54.570611 1609109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15432152.pem
	I1009 23:18:54.579366 1609109 command_runner.go:130] > 3ec20f2e
	I1009 23:18:54.579472 1609109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15432152.pem /etc/ssl/certs/3ec20f2e.0"
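	The values printed above (b5213941, 51391683, 3ec20f2e) are OpenSSL subject hashes, and each cert is then symlinked as <hash>.0 so OpenSSL's directory-based lookup can find it. A small Go sketch of that hash-and-symlink convention, assuming a single cert per hash (hence the fixed .0 suffix):

	// cert_link.go - sketch: compute the OpenSSL subject hash of a CA cert and
	// create the /etc/ssl/certs/<hash>.0 symlink, as the commands above do.
	package main

	import (
		"fmt"
		"log"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		const pem = "/usr/share/ca-certificates/minikubeCA.pem"

		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			log.Fatal(err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem above

		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		if _, err := os.Lstat(link); os.IsNotExist(err) {
			if err := os.Symlink(pem, link); err != nil {
				log.Fatal(err)
			}
		}
	}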
	I1009 23:18:54.591181 1609109 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1009 23:18:54.595594 1609109 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1009 23:18:54.595673 1609109 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1009 23:18:54.595733 1609109 kubeadm.go:404] StartCluster: {Name:multinode-717678 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-717678 Namespace:default APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetr
ics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1009 23:18:54.595818 1609109 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 23:18:54.595882 1609109 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 23:18:54.638041 1609109 cri.go:89] found id: ""
	I1009 23:18:54.638183 1609109 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 23:18:54.649067 1609109 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I1009 23:18:54.649096 1609109 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I1009 23:18:54.649104 1609109 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I1009 23:18:54.649181 1609109 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 23:18:54.660137 1609109 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1009 23:18:54.660248 1609109 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 23:18:54.671305 1609109 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1009 23:18:54.671333 1609109 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1009 23:18:54.671342 1609109 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1009 23:18:54.671352 1609109 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 23:18:54.671382 1609109 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 23:18:54.671418 1609109 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 23:18:54.726998 1609109 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I1009 23:18:54.727028 1609109 command_runner.go:130] > [init] Using Kubernetes version: v1.28.2
	I1009 23:18:54.727547 1609109 kubeadm.go:322] [preflight] Running pre-flight checks
	I1009 23:18:54.727616 1609109 command_runner.go:130] > [preflight] Running pre-flight checks
	I1009 23:18:54.774317 1609109 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1009 23:18:54.774345 1609109 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I1009 23:18:54.774417 1609109 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1047-aws
	I1009 23:18:54.774431 1609109 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1047-aws
	I1009 23:18:54.774475 1609109 kubeadm.go:322] OS: Linux
	I1009 23:18:54.774487 1609109 command_runner.go:130] > OS: Linux
	I1009 23:18:54.774548 1609109 kubeadm.go:322] CGROUPS_CPU: enabled
	I1009 23:18:54.774563 1609109 command_runner.go:130] > CGROUPS_CPU: enabled
	I1009 23:18:54.774620 1609109 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1009 23:18:54.774630 1609109 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I1009 23:18:54.774683 1609109 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1009 23:18:54.774695 1609109 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I1009 23:18:54.774750 1609109 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1009 23:18:54.774762 1609109 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I1009 23:18:54.774816 1609109 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1009 23:18:54.774827 1609109 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I1009 23:18:54.774884 1609109 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1009 23:18:54.774898 1609109 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I1009 23:18:54.774979 1609109 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1009 23:18:54.774994 1609109 command_runner.go:130] > CGROUPS_PIDS: enabled
	I1009 23:18:54.775050 1609109 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1009 23:18:54.775064 1609109 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I1009 23:18:54.775149 1609109 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1009 23:18:54.775161 1609109 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I1009 23:18:54.859153 1609109 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 23:18:54.859221 1609109 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 23:18:54.859369 1609109 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 23:18:54.859401 1609109 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 23:18:54.859530 1609109 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 23:18:54.859554 1609109 command_runner.go:130] > [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 23:18:55.152038 1609109 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 23:18:55.157428 1609109 out.go:204]   - Generating certificates and keys ...
	I1009 23:18:55.152327 1609109 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 23:18:55.157695 1609109 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1009 23:18:55.157731 1609109 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1009 23:18:55.157838 1609109 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1009 23:18:55.157866 1609109 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1009 23:18:55.665376 1609109 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1009 23:18:55.665461 1609109 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I1009 23:18:55.818078 1609109 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1009 23:18:55.818145 1609109 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I1009 23:18:56.057013 1609109 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1009 23:18:56.057036 1609109 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I1009 23:18:56.686692 1609109 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1009 23:18:56.686717 1609109 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I1009 23:18:57.282218 1609109 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1009 23:18:57.282246 1609109 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I1009 23:18:57.282528 1609109 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-717678] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1009 23:18:57.282573 1609109 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-717678] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1009 23:18:57.890235 1609109 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1009 23:18:57.890263 1609109 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I1009 23:18:57.890565 1609109 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-717678] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1009 23:18:57.890583 1609109 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-717678] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1009 23:18:58.470883 1609109 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1009 23:18:58.470911 1609109 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I1009 23:18:59.439806 1609109 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1009 23:18:59.439834 1609109 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I1009 23:18:59.812764 1609109 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1009 23:18:59.812788 1609109 command_runner.go:130] > [certs] Generating "sa" key and public key
	I1009 23:18:59.813105 1609109 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 23:18:59.813118 1609109 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 23:19:00.333574 1609109 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 23:19:00.333619 1609109 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 23:19:00.816600 1609109 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 23:19:00.816624 1609109 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 23:19:01.405705 1609109 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 23:19:01.405729 1609109 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 23:19:02.185714 1609109 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 23:19:02.185737 1609109 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 23:19:02.186506 1609109 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 23:19:02.186523 1609109 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 23:19:02.189267 1609109 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 23:19:02.192069 1609109 out.go:204]   - Booting up control plane ...
	I1009 23:19:02.189356 1609109 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 23:19:02.192168 1609109 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 23:19:02.192178 1609109 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 23:19:02.192292 1609109 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 23:19:02.192298 1609109 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 23:19:02.192591 1609109 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 23:19:02.192603 1609109 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 23:19:02.206755 1609109 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 23:19:02.206783 1609109 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 23:19:02.207633 1609109 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 23:19:02.207655 1609109 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 23:19:02.207974 1609109 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1009 23:19:02.207987 1609109 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1009 23:19:02.308256 1609109 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1009 23:19:02.308283 1609109 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1009 23:19:09.809983 1609109 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.502304 seconds
	I1009 23:19:09.810009 1609109 command_runner.go:130] > [apiclient] All control plane components are healthy after 7.502304 seconds
	I1009 23:19:09.810109 1609109 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1009 23:19:09.810118 1609109 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1009 23:19:09.828243 1609109 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1009 23:19:09.828267 1609109 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1009 23:19:10.358418 1609109 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1009 23:19:10.358446 1609109 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I1009 23:19:10.358618 1609109 kubeadm.go:322] [mark-control-plane] Marking the node multinode-717678 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1009 23:19:10.358636 1609109 command_runner.go:130] > [mark-control-plane] Marking the node multinode-717678 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1009 23:19:10.871853 1609109 kubeadm.go:322] [bootstrap-token] Using token: zzqywk.2biwhj14gpe1oel9
	I1009 23:19:10.874253 1609109 out.go:204]   - Configuring RBAC rules ...
	I1009 23:19:10.871959 1609109 command_runner.go:130] > [bootstrap-token] Using token: zzqywk.2biwhj14gpe1oel9
	I1009 23:19:10.874372 1609109 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1009 23:19:10.874384 1609109 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1009 23:19:10.881185 1609109 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1009 23:19:10.881207 1609109 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1009 23:19:10.890302 1609109 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1009 23:19:10.890343 1609109 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1009 23:19:10.894590 1609109 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1009 23:19:10.894612 1609109 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1009 23:19:10.900156 1609109 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1009 23:19:10.900181 1609109 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1009 23:19:10.904868 1609109 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1009 23:19:10.904895 1609109 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1009 23:19:10.920718 1609109 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1009 23:19:10.920746 1609109 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1009 23:19:11.161181 1609109 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1009 23:19:11.161250 1609109 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1009 23:19:11.293271 1609109 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1009 23:19:11.293304 1609109 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1009 23:19:11.293310 1609109 kubeadm.go:322] 
	I1009 23:19:11.293367 1609109 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1009 23:19:11.293377 1609109 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I1009 23:19:11.293382 1609109 kubeadm.go:322] 
	I1009 23:19:11.293455 1609109 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1009 23:19:11.293464 1609109 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I1009 23:19:11.293469 1609109 kubeadm.go:322] 
	I1009 23:19:11.293493 1609109 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1009 23:19:11.293502 1609109 command_runner.go:130] >   mkdir -p $HOME/.kube
	I1009 23:19:11.293557 1609109 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1009 23:19:11.293565 1609109 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1009 23:19:11.293611 1609109 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1009 23:19:11.293620 1609109 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1009 23:19:11.293624 1609109 kubeadm.go:322] 
	I1009 23:19:11.293675 1609109 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1009 23:19:11.293686 1609109 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I1009 23:19:11.293691 1609109 kubeadm.go:322] 
	I1009 23:19:11.293735 1609109 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1009 23:19:11.293743 1609109 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1009 23:19:11.293747 1609109 kubeadm.go:322] 
	I1009 23:19:11.293796 1609109 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1009 23:19:11.293803 1609109 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I1009 23:19:11.293873 1609109 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1009 23:19:11.293880 1609109 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1009 23:19:11.293943 1609109 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1009 23:19:11.293948 1609109 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1009 23:19:11.293952 1609109 kubeadm.go:322] 
	I1009 23:19:11.294031 1609109 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1009 23:19:11.294036 1609109 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I1009 23:19:11.294107 1609109 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1009 23:19:11.294112 1609109 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I1009 23:19:11.294118 1609109 kubeadm.go:322] 
	I1009 23:19:11.294196 1609109 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token zzqywk.2biwhj14gpe1oel9 \
	I1009 23:19:11.294201 1609109 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token zzqywk.2biwhj14gpe1oel9 \
	I1009 23:19:11.294296 1609109 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:e2aebf53348f507bad0adab8a765b229b70810954e22f1e7a919941009267e3f \
	I1009 23:19:11.294302 1609109 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:e2aebf53348f507bad0adab8a765b229b70810954e22f1e7a919941009267e3f \
	I1009 23:19:11.294321 1609109 kubeadm.go:322] 	--control-plane 
	I1009 23:19:11.294325 1609109 command_runner.go:130] > 	--control-plane 
	I1009 23:19:11.294329 1609109 kubeadm.go:322] 
	I1009 23:19:11.294408 1609109 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1009 23:19:11.294413 1609109 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I1009 23:19:11.294417 1609109 kubeadm.go:322] 
	I1009 23:19:11.294496 1609109 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token zzqywk.2biwhj14gpe1oel9 \
	I1009 23:19:11.294500 1609109 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token zzqywk.2biwhj14gpe1oel9 \
	I1009 23:19:11.294595 1609109 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:e2aebf53348f507bad0adab8a765b229b70810954e22f1e7a919941009267e3f 
	I1009 23:19:11.294600 1609109 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:e2aebf53348f507bad0adab8a765b229b70810954e22f1e7a919941009267e3f 
	I1009 23:19:11.300672 1609109 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1047-aws\n", err: exit status 1
	I1009 23:19:11.300694 1609109 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1047-aws\n", err: exit status 1
	I1009 23:19:11.300794 1609109 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 23:19:11.300799 1609109 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 23:19:11.300811 1609109 cni.go:84] Creating CNI manager for ""
	I1009 23:19:11.300817 1609109 cni.go:136] 1 nodes found, recommending kindnet
	I1009 23:19:11.303187 1609109 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1009 23:19:11.305202 1609109 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1009 23:19:11.325316 1609109 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1009 23:19:11.325346 1609109 command_runner.go:130] >   Size: 3841245   	Blocks: 7504       IO Block: 4096   regular file
	I1009 23:19:11.325355 1609109 command_runner.go:130] > Device: 3ah/58d	Inode: 1308851     Links: 1
	I1009 23:19:11.325363 1609109 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1009 23:19:11.325378 1609109 command_runner.go:130] > Access: 2023-05-09 19:54:42.000000000 +0000
	I1009 23:19:11.325385 1609109 command_runner.go:130] > Modify: 2023-05-09 19:54:42.000000000 +0000
	I1009 23:19:11.325391 1609109 command_runner.go:130] > Change: 2023-10-09 22:55:10.333391806 +0000
	I1009 23:19:11.325402 1609109 command_runner.go:130] >  Birth: 2023-10-09 22:55:10.293391681 +0000
	I1009 23:19:11.326676 1609109 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.2/kubectl ...
	I1009 23:19:11.326701 1609109 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1009 23:19:11.376567 1609109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1009 23:19:12.254117 1609109 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I1009 23:19:12.264022 1609109 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I1009 23:19:12.273066 1609109 command_runner.go:130] > serviceaccount/kindnet created
	I1009 23:19:12.285329 1609109 command_runner.go:130] > daemonset.apps/kindnet created
	I1009 23:19:12.291604 1609109 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1009 23:19:12.291701 1609109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:19:12.291726 1609109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90 minikube.k8s.io/name=multinode-717678 minikube.k8s.io/updated_at=2023_10_09T23_19_12_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:19:12.427973 1609109 command_runner.go:130] > node/multinode-717678 labeled
	I1009 23:19:12.431735 1609109 command_runner.go:130] > -16
	I1009 23:19:12.431772 1609109 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I1009 23:19:12.431796 1609109 ops.go:34] apiserver oom_adj: -16
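	The -16 read back above is the apiserver's OOM score adjustment, fetched with cat /proc/$(pgrep kube-apiserver)/oom_adj. A minimal Go sketch of the same check (assumes exactly one kube-apiserver process is running):

	// oom_check.go - sketch: read kube-apiserver's oom_adj, mirroring
	// "cat /proc/$(pgrep kube-apiserver)/oom_adj" from the log above.
	package main

	import (
		"fmt"
		"log"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		pid, err := exec.Command("pgrep", "kube-apiserver").Output()
		if err != nil {
			log.Fatal(err)
		}
		path := fmt.Sprintf("/proc/%s/oom_adj", strings.TrimSpace(string(pid)))

		adj, err := os.ReadFile(path)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("apiserver oom_adj: %s", adj)
	}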
	I1009 23:19:12.431869 1609109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:19:12.535179 1609109 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1009 23:19:12.539060 1609109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:19:12.635146 1609109 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1009 23:19:13.135682 1609109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:19:13.225304 1609109 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1009 23:19:13.635944 1609109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:19:13.726733 1609109 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1009 23:19:14.136163 1609109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:19:14.222624 1609109 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1009 23:19:14.635668 1609109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:19:14.726451 1609109 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1009 23:19:15.137994 1609109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:19:15.243802 1609109 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1009 23:19:15.635578 1609109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:19:15.723605 1609109 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1009 23:19:16.136090 1609109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:19:16.231010 1609109 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1009 23:19:16.635497 1609109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:19:16.724840 1609109 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1009 23:19:17.135907 1609109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:19:17.227061 1609109 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1009 23:19:17.635419 1609109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:19:17.730419 1609109 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1009 23:19:18.136135 1609109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:19:18.228170 1609109 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1009 23:19:18.635838 1609109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:19:18.725442 1609109 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1009 23:19:19.135903 1609109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:19:19.232095 1609109 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1009 23:19:19.635418 1609109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:19:19.730376 1609109 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1009 23:19:20.136024 1609109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:19:20.246765 1609109 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1009 23:19:20.635483 1609109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:19:20.729326 1609109 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1009 23:19:21.136016 1609109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:19:21.236155 1609109 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1009 23:19:21.635394 1609109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:19:21.727852 1609109 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1009 23:19:22.135742 1609109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:19:22.234388 1609109 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1009 23:19:22.636098 1609109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:19:22.731766 1609109 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1009 23:19:23.136328 1609109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:19:23.232513 1609109 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1009 23:19:23.635838 1609109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:19:23.729052 1609109 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1009 23:19:24.135740 1609109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:19:24.226456 1609109 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1009 23:19:24.635944 1609109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:19:24.734181 1609109 command_runner.go:130] > NAME      SECRETS   AGE
	I1009 23:19:24.734200 1609109 command_runner.go:130] > default   0         0s
	I1009 23:19:24.737575 1609109 kubeadm.go:1081] duration metric: took 12.445943273s to wait for elevateKubeSystemPrivileges.
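The 12.4s above is almost entirely the retry loop visible in the preceding lines: `kubectl get sa default` is re-run on a roughly 500ms cadence until kube-controller-manager has minted the default service account. A sketch of that loop; the one-minute timeout is an assumption, only the command and cadence come from the log.

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(time.Minute) // timeout assumed
		for time.Now().Before(deadline) {
			out, err := exec.Command("sudo", "/var/lib/minikube/binaries/v1.28.2/kubectl",
				"get", "sa", "default",
				"--kubeconfig=/var/lib/minikube/kubeconfig").CombinedOutput()
			if err == nil {
				fmt.Printf("%s", out) // NAME  SECRETS  AGE / default  0  0s
				return
			}
			time.Sleep(500 * time.Millisecond) // cadence seen in the log
		}
		fmt.Println("timed out waiting for the default service account")
	}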
	I1009 23:19:24.737601 1609109 kubeadm.go:406] StartCluster complete in 30.14187241s
	I1009 23:19:24.737619 1609109 settings.go:142] acquiring lock: {Name:mkeeac28244e9503bae3d91ba3a5c4a3392545f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 23:19:24.737699 1609109 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17375-1537865/kubeconfig
	I1009 23:19:24.738416 1609109 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17375-1537865/kubeconfig: {Name:mk913f33f2148d9a5b250c16fc9df0a8782f9275 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 23:19:24.738653 1609109 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1009 23:19:24.738922 1609109 config.go:182] Loaded profile config "multinode-717678": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1009 23:19:24.738945 1609109 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17375-1537865/kubeconfig
	I1009 23:19:24.738955 1609109 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1009 23:19:24.739026 1609109 addons.go:69] Setting storage-provisioner=true in profile "multinode-717678"
	I1009 23:19:24.739040 1609109 addons.go:231] Setting addon storage-provisioner=true in "multinode-717678"
	I1009 23:19:24.739078 1609109 host.go:66] Checking if "multinode-717678" exists ...
	I1009 23:19:24.739264 1609109 kapi.go:59] client config for multinode-717678: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/multinode-717678/client.crt", KeyFile:"/home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/multinode-717678/client.key", CAFile:"/home/jenkins/minikube-integration/17375-1537865/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16b67c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 23:19:24.739667 1609109 cli_runner.go:164] Run: docker container inspect multinode-717678 --format={{.State.Status}}
	I1009 23:19:24.740148 1609109 addons.go:69] Setting default-storageclass=true in profile "multinode-717678"
	I1009 23:19:24.740169 1609109 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-717678"
	I1009 23:19:24.740477 1609109 cli_runner.go:164] Run: docker container inspect multinode-717678 --format={{.State.Status}}
	I1009 23:19:24.740507 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1009 23:19:24.740519 1609109 round_trippers.go:469] Request Headers:
	I1009 23:19:24.740528 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:19:24.740537 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:19:24.740747 1609109 cert_rotation.go:137] Starting client certificate rotation controller
	I1009 23:19:24.769527 1609109 round_trippers.go:574] Response Status: 200 OK in 28 milliseconds
	I1009 23:19:24.769548 1609109 round_trippers.go:577] Response Headers:
	I1009 23:19:24.769557 1609109 round_trippers.go:580]     Audit-Id: ac537281-c6de-47f1-9a9c-74f4bbb8ceca
	I1009 23:19:24.769563 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:19:24.769570 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:19:24.769576 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:19:24.769584 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:19:24.769590 1609109 round_trippers.go:580]     Content-Length: 291
	I1009 23:19:24.769596 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:19:24 GMT
	I1009 23:19:24.769630 1609109 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"a040aa83-288a-42e7-9e24-15b47b6337a4","resourceVersion":"219","creationTimestamp":"2023-10-09T23:19:11Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1009 23:19:24.770019 1609109 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"a040aa83-288a-42e7-9e24-15b47b6337a4","resourceVersion":"219","creationTimestamp":"2023-10-09T23:19:11Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1009 23:19:24.770066 1609109 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1009 23:19:24.770072 1609109 round_trippers.go:469] Request Headers:
	I1009 23:19:24.770080 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:19:24.770087 1609109 round_trippers.go:473]     Content-Type: application/json
	I1009 23:19:24.770093 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:19:24.812892 1609109 round_trippers.go:574] Response Status: 200 OK in 42 milliseconds
	I1009 23:19:24.812919 1609109 round_trippers.go:577] Response Headers:
	I1009 23:19:24.812928 1609109 round_trippers.go:580]     Audit-Id: 226c7d7c-ba18-4781-adb0-30921af77e4f
	I1009 23:19:24.812935 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:19:24.812941 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:19:24.812948 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:19:24.812954 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:19:24.812960 1609109 round_trippers.go:580]     Content-Length: 291
	I1009 23:19:24.812966 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:19:24 GMT
	I1009 23:19:24.812994 1609109 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"a040aa83-288a-42e7-9e24-15b47b6337a4","resourceVersion":"309","creationTimestamp":"2023-10-09T23:19:11Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1009 23:19:24.813196 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1009 23:19:24.813211 1609109 round_trippers.go:469] Request Headers:
	I1009 23:19:24.813219 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:19:24.813232 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:19:24.827135 1609109 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 23:19:24.823966 1609109 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17375-1537865/kubeconfig
	I1009 23:19:24.829680 1609109 kapi.go:59] client config for multinode-717678: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/multinode-717678/client.crt", KeyFile:"/home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/multinode-717678/client.key", CAFile:"/home/jenkins/minikube-integration/17375-1537865/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16b67c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 23:19:24.829966 1609109 addons.go:231] Setting addon default-storageclass=true in "multinode-717678"
	I1009 23:19:24.830001 1609109 host.go:66] Checking if "multinode-717678" exists ...
	I1009 23:19:24.830459 1609109 cli_runner.go:164] Run: docker container inspect multinode-717678 --format={{.State.Status}}
	I1009 23:19:24.830755 1609109 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 23:19:24.830772 1609109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 23:19:24.830818 1609109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-717678
	I1009 23:19:24.833452 1609109 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I1009 23:19:24.833474 1609109 round_trippers.go:577] Response Headers:
	I1009 23:19:24.833482 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:19:24.833488 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:19:24.833495 1609109 round_trippers.go:580]     Content-Length: 291
	I1009 23:19:24.833501 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:19:24 GMT
	I1009 23:19:24.833507 1609109 round_trippers.go:580]     Audit-Id: 8a13500c-32f7-4fe8-8ead-0633ec179475
	I1009 23:19:24.833513 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:19:24.833519 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:19:24.845688 1609109 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"a040aa83-288a-42e7-9e24-15b47b6337a4","resourceVersion":"309","creationTimestamp":"2023-10-09T23:19:11Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1009 23:19:24.845802 1609109 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-717678" context rescaled to 1 replicas
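The GET/PUT pair above targets the autoscaling/v1 Scale subresource of the coredns Deployment: read spec.replicas (2 in the first response body), write back 1. With client-go the same round trip is two calls; this is a sketch of the equivalent, not minikube's own code, and `cs` is an assumed client.

	package sketch

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// rescaleCoreDNS mirrors the GET + PUT on .../deployments/coredns/scale.
	func rescaleCoreDNS(ctx context.Context, cs kubernetes.Interface) error {
		scale, err := cs.AppsV1().Deployments("kube-system").
			GetScale(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			return err
		}
		scale.Spec.Replicas = 1 // was 2 in the response body above
		_, err = cs.AppsV1().Deployments("kube-system").
			UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
		return err
	}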
	I1009 23:19:24.845828 1609109 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 23:19:24.848027 1609109 out.go:177] * Verifying Kubernetes components...
	I1009 23:19:24.850101 1609109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 23:19:24.878057 1609109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34434 SSHKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/machines/multinode-717678/id_rsa Username:docker}
	I1009 23:19:24.884982 1609109 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 23:19:24.885004 1609109 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 23:19:24.885064 1609109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-717678
	I1009 23:19:24.916481 1609109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34434 SSHKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/machines/multinode-717678/id_rsa Username:docker}
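Both "new ssh client" lines dial the container's published SSH port (127.0.0.1:34434) with the profile's id_rsa key and the docker user, exactly the fields printed in the struct above. A minimal golang.org/x/crypto/ssh sketch of that connection; disabling host-key checking here is for illustration only.

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		// Key path, user, and address are the ones printed by sshutil.go above.
		key, err := os.ReadFile("/home/jenkins/minikube-integration/17375-1537865/.minikube/machines/multinode-717678/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // sketch only
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:34434", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()
		fmt.Println("connected")
	}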
	I1009 23:19:24.937860 1609109 command_runner.go:130] > apiVersion: v1
	I1009 23:19:24.937880 1609109 command_runner.go:130] > data:
	I1009 23:19:24.937886 1609109 command_runner.go:130] >   Corefile: |
	I1009 23:19:24.937891 1609109 command_runner.go:130] >     .:53 {
	I1009 23:19:24.937896 1609109 command_runner.go:130] >         errors
	I1009 23:19:24.937904 1609109 command_runner.go:130] >         health {
	I1009 23:19:24.937909 1609109 command_runner.go:130] >            lameduck 5s
	I1009 23:19:24.937914 1609109 command_runner.go:130] >         }
	I1009 23:19:24.937919 1609109 command_runner.go:130] >         ready
	I1009 23:19:24.937934 1609109 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1009 23:19:24.937946 1609109 command_runner.go:130] >            pods insecure
	I1009 23:19:24.937958 1609109 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1009 23:19:24.937964 1609109 command_runner.go:130] >            ttl 30
	I1009 23:19:24.937974 1609109 command_runner.go:130] >         }
	I1009 23:19:24.937979 1609109 command_runner.go:130] >         prometheus :9153
	I1009 23:19:24.937986 1609109 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1009 23:19:24.937996 1609109 command_runner.go:130] >            max_concurrent 1000
	I1009 23:19:24.938001 1609109 command_runner.go:130] >         }
	I1009 23:19:24.938007 1609109 command_runner.go:130] >         cache 30
	I1009 23:19:24.938016 1609109 command_runner.go:130] >         loop
	I1009 23:19:24.938022 1609109 command_runner.go:130] >         reload
	I1009 23:19:24.938027 1609109 command_runner.go:130] >         loadbalance
	I1009 23:19:24.938032 1609109 command_runner.go:130] >     }
	I1009 23:19:24.938040 1609109 command_runner.go:130] > kind: ConfigMap
	I1009 23:19:24.938045 1609109 command_runner.go:130] > metadata:
	I1009 23:19:24.938055 1609109 command_runner.go:130] >   creationTimestamp: "2023-10-09T23:19:11Z"
	I1009 23:19:24.938061 1609109 command_runner.go:130] >   name: coredns
	I1009 23:19:24.938066 1609109 command_runner.go:130] >   namespace: kube-system
	I1009 23:19:24.938076 1609109 command_runner.go:130] >   resourceVersion: "215"
	I1009 23:19:24.938082 1609109 command_runner.go:130] >   uid: 65d46ec0-dd74-44f9-8503-41d4cb47106f
	I1009 23:19:24.942036 1609109 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
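The sed pipeline above rewrites the Corefile fetched a moment earlier: it inserts a log directive ahead of errors and a hosts block ahead of the forward plugin, so pods can resolve host.minikube.internal to the gateway (192.168.58.1). Reconstructed from the two sed expressions, the replaced Corefile begins as follows (unchanged plugins elided):

	.:53 {
	        log
	        errors
	        ...
	        hosts {
	           192.168.58.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf {
	           max_concurrent 1000
	        }
	        ...
	}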
	I1009 23:19:24.942460 1609109 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17375-1537865/kubeconfig
	I1009 23:19:24.942713 1609109 kapi.go:59] client config for multinode-717678: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/multinode-717678/client.crt", KeyFile:"/home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/multinode-717678/client.key", CAFile:"/home/jenkins/minikube-integration/17375-1537865/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16b67c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 23:19:24.942971 1609109 node_ready.go:35] waiting up to 6m0s for node "multinode-717678" to be "Ready" ...
	I1009 23:19:24.943069 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-717678
	I1009 23:19:24.943079 1609109 round_trippers.go:469] Request Headers:
	I1009 23:19:24.943088 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:19:24.943095 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:19:24.980891 1609109 round_trippers.go:574] Response Status: 200 OK in 37 milliseconds
	I1009 23:19:24.980912 1609109 round_trippers.go:577] Response Headers:
	I1009 23:19:24.980921 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:19:24.980928 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:19:24 GMT
	I1009 23:19:24.980935 1609109 round_trippers.go:580]     Audit-Id: b42081a8-0e6d-47a6-8d9e-5b26f9132ab4
	I1009 23:19:24.980941 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:19:24.980948 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:19:24.980954 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:19:24.982017 1609109 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-717678","uid":"874e5ae6-ff43-40a8-8089-c062ee5a5cbc","resourceVersion":"303","creationTimestamp":"2023-10-09T23:19:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-717678","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-717678","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_19_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:1
9:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations [truncated 6119 chars]
	I1009 23:19:24.982748 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-717678
	I1009 23:19:24.982758 1609109 round_trippers.go:469] Request Headers:
	I1009 23:19:24.982767 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:19:24.982774 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:19:24.991580 1609109 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1009 23:19:24.991602 1609109 round_trippers.go:577] Response Headers:
	I1009 23:19:24.991611 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:19:24.991617 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:19:24.991624 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:19:24.991630 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:19:24.991636 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:19:24 GMT
	I1009 23:19:24.991642 1609109 round_trippers.go:580]     Audit-Id: 8b99843b-20bd-433f-b0e4-bafa80c28712
	I1009 23:19:24.992250 1609109 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-717678","uid":"874e5ae6-ff43-40a8-8089-c062ee5a5cbc","resourceVersion":"303","creationTimestamp":"2023-10-09T23:19:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-717678","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-717678","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_19_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:1
9:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations [truncated 6119 chars]
	I1009 23:19:25.093267 1609109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 23:19:25.114433 1609109 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 23:19:25.493106 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-717678
	I1009 23:19:25.493180 1609109 round_trippers.go:469] Request Headers:
	I1009 23:19:25.493214 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:19:25.493240 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:19:25.605647 1609109 round_trippers.go:574] Response Status: 200 OK in 112 milliseconds
	I1009 23:19:25.605723 1609109 round_trippers.go:577] Response Headers:
	I1009 23:19:25.605753 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:19:25.605785 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:19:25.605807 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:19:25.605825 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:19:25.605845 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:19:25 GMT
	I1009 23:19:25.605864 1609109 round_trippers.go:580]     Audit-Id: a79cbac1-3c30-4e68-87c1-721432cebd6c
	I1009 23:19:25.621807 1609109 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-717678","uid":"874e5ae6-ff43-40a8-8089-c062ee5a5cbc","resourceVersion":"317","creationTimestamp":"2023-10-09T23:19:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-717678","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-717678","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_19_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-09T23:19:08Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1009 23:19:25.679988 1609109 command_runner.go:130] > configmap/coredns replaced
	I1009 23:19:25.680057 1609109 start.go:926] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS's ConfigMap
	I1009 23:19:25.681819 1609109 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I1009 23:19:25.688295 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses
	I1009 23:19:25.688372 1609109 round_trippers.go:469] Request Headers:
	I1009 23:19:25.688394 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:19:25.688413 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:19:25.753335 1609109 round_trippers.go:574] Response Status: 200 OK in 64 milliseconds
	I1009 23:19:25.753399 1609109 round_trippers.go:577] Response Headers:
	I1009 23:19:25.753420 1609109 round_trippers.go:580]     Content-Length: 1273
	I1009 23:19:25.753447 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:19:25 GMT
	I1009 23:19:25.753478 1609109 round_trippers.go:580]     Audit-Id: 1d8c97a3-99fb-48ef-86ee-d71e31409a4f
	I1009 23:19:25.753501 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:19:25.753522 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:19:25.753550 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:19:25.753578 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:19:25.754970 1609109 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"350"},"items":[{"metadata":{"name":"standard","uid":"dc0769c0-4709-4491-add4-7d78c90368f7","resourceVersion":"343","creationTimestamp":"2023-10-09T23:19:25Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-10-09T23:19:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I1009 23:19:25.755447 1609109 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"dc0769c0-4709-4491-add4-7d78c90368f7","resourceVersion":"343","creationTimestamp":"2023-10-09T23:19:25Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-10-09T23:19:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1009 23:19:25.755529 1609109 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1009 23:19:25.755564 1609109 round_trippers.go:469] Request Headers:
	I1009 23:19:25.755589 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:19:25.755608 1609109 round_trippers.go:473]     Content-Type: application/json
	I1009 23:19:25.755627 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:19:25.776461 1609109 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I1009 23:19:25.776536 1609109 round_trippers.go:577] Response Headers:
	I1009 23:19:25.776560 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:19:25 GMT
	I1009 23:19:25.776578 1609109 round_trippers.go:580]     Audit-Id: 579c176e-e379-45e4-8dea-b2d6cd2cf3ee
	I1009 23:19:25.776594 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:19:25.776627 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:19:25.776645 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:19:25.776662 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:19:25.776678 1609109 round_trippers.go:580]     Content-Length: 1220
	I1009 23:19:25.778384 1609109 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"dc0769c0-4709-4491-add4-7d78c90368f7","resourceVersion":"343","creationTimestamp":"2023-10-09T23:19:25Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-10-09T23:19:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
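The PUT above re-asserts the default-class annotation on the standard StorageClass created by the addon. Unpacking the kubectl.kubernetes.io/last-applied-configuration annotation in the response body gives the manifest that was applied (a reconstruction from that annotation, not a file from the repo):

	apiVersion: storage.k8s.io/v1
	kind: StorageClass
	metadata:
	  name: standard
	  labels:
	    addonmanager.kubernetes.io/mode: EnsureExists
	  annotations:
	    storageclass.kubernetes.io/is-default-class: "true"
	provisioner: k8s.io/minikube-hostpath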
	I1009 23:19:25.858447 1609109 command_runner.go:130] > serviceaccount/storage-provisioner created
	I1009 23:19:25.867623 1609109 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I1009 23:19:25.890578 1609109 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1009 23:19:25.909436 1609109 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1009 23:19:25.920859 1609109 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I1009 23:19:25.934944 1609109 command_runner.go:130] > pod/storage-provisioner created
	I1009 23:19:25.943014 1609109 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1009 23:19:25.946341 1609109 addons.go:502] enable addons completed in 1.207372495s: enabled=[default-storageclass storage-provisioner]
	I1009 23:19:25.993830 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-717678
	I1009 23:19:25.993856 1609109 round_trippers.go:469] Request Headers:
	I1009 23:19:25.993866 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:19:25.993873 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:19:25.996350 1609109 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:19:25.996375 1609109 round_trippers.go:577] Response Headers:
	I1009 23:19:25.996384 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:19:25.996390 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:19:25.996397 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:19:25 GMT
	I1009 23:19:25.996403 1609109 round_trippers.go:580]     Audit-Id: 2d07cab7-50cf-44d0-b0a0-c652d73a174a
	I1009 23:19:25.996409 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:19:25.996415 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:19:25.996676 1609109 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-717678","uid":"874e5ae6-ff43-40a8-8089-c062ee5a5cbc","resourceVersion":"317","creationTimestamp":"2023-10-09T23:19:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-717678","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-717678","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_19_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-09T23:19:08Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1009 23:19:26.493627 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-717678
	I1009 23:19:26.493652 1609109 round_trippers.go:469] Request Headers:
	I1009 23:19:26.493663 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:19:26.493676 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:19:26.496110 1609109 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:19:26.496133 1609109 round_trippers.go:577] Response Headers:
	I1009 23:19:26.496144 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:19:26 GMT
	I1009 23:19:26.496177 1609109 round_trippers.go:580]     Audit-Id: 049e8380-f3f4-4a8f-82b1-97d995b18d52
	I1009 23:19:26.496192 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:19:26.496199 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:19:26.496209 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:19:26.496216 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:19:26.496437 1609109 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-717678","uid":"874e5ae6-ff43-40a8-8089-c062ee5a5cbc","resourceVersion":"317","creationTimestamp":"2023-10-09T23:19:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-717678","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-717678","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_19_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-09T23:19:08Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1009 23:19:26.992877 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-717678
	I1009 23:19:26.992902 1609109 round_trippers.go:469] Request Headers:
	I1009 23:19:26.992911 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:19:26.992918 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:19:26.995603 1609109 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:19:26.995624 1609109 round_trippers.go:577] Response Headers:
	I1009 23:19:26.995633 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:19:26.995639 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:19:26 GMT
	I1009 23:19:26.995645 1609109 round_trippers.go:580]     Audit-Id: dcabbb6d-a218-40ef-b1a7-4cd89854e8b1
	I1009 23:19:26.995652 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:19:26.995658 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:19:26.995664 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:19:26.996382 1609109 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-717678","uid":"874e5ae6-ff43-40a8-8089-c062ee5a5cbc","resourceVersion":"317","creationTimestamp":"2023-10-09T23:19:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-717678","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-717678","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_19_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-09T23:19:08Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1009 23:19:26.996786 1609109 node_ready.go:58] node "multinode-717678" has status "Ready":"False"
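node_ready re-fetches the Node object every ~500ms and inspects status.conditions for the Ready type; it stays "False" here until the kindnet CNI applied earlier makes the node routable. A client-go sketch of the condition check (an equivalent, not minikube's node_ready.go):

	package sketch

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// nodeReady reports whether the node's Ready condition is "True",
	// the predicate the 6m0s wait loop above keeps re-evaluating.
	func nodeReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}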
	I1009 23:19:27.493566 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-717678
	I1009 23:19:27.493588 1609109 round_trippers.go:469] Request Headers:
	I1009 23:19:27.493598 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:19:27.493605 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:19:27.496238 1609109 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:19:27.496297 1609109 round_trippers.go:577] Response Headers:
	I1009 23:19:27.496319 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:19:27.496341 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:19:27.496370 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:19:27 GMT
	I1009 23:19:27.496392 1609109 round_trippers.go:580]     Audit-Id: 63b07033-f626-4a4e-9665-e9a7184f1cd2
	I1009 23:19:27.496412 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:19:27.496431 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:19:27.496552 1609109 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-717678","uid":"874e5ae6-ff43-40a8-8089-c062ee5a5cbc","resourceVersion":"317","creationTimestamp":"2023-10-09T23:19:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-717678","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-717678","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_19_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-09T23:19:08Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1009 23:19:27.992880 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-717678
	I1009 23:19:27.992907 1609109 round_trippers.go:469] Request Headers:
	I1009 23:19:27.992917 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:19:27.992925 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:19:27.995595 1609109 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:19:27.995622 1609109 round_trippers.go:577] Response Headers:
	I1009 23:19:27.995630 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:19:27.995638 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:19:27 GMT
	I1009 23:19:27.995644 1609109 round_trippers.go:580]     Audit-Id: b8e5fd96-73ff-4eb6-a1a5-42d506440c9a
	I1009 23:19:27.995650 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:19:27.995659 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:19:27.995668 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:19:27.995791 1609109 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-717678","uid":"874e5ae6-ff43-40a8-8089-c062ee5a5cbc","resourceVersion":"317","creationTimestamp":"2023-10-09T23:19:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-717678","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-717678","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_19_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-09T23:19:08Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1009 23:19:28.492900 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-717678
	I1009 23:19:28.492924 1609109 round_trippers.go:469] Request Headers:
	I1009 23:19:28.492933 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:19:28.492941 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:19:28.495686 1609109 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:19:28.495712 1609109 round_trippers.go:577] Response Headers:
	I1009 23:19:28.495721 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:19:28.495728 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:19:28.495735 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:19:28.495741 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:19:28.495748 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:19:28 GMT
	I1009 23:19:28.495757 1609109 round_trippers.go:580]     Audit-Id: e0bae7a4-921d-4100-8ab5-67b541e5e929
	I1009 23:19:28.495920 1609109 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-717678","uid":"874e5ae6-ff43-40a8-8089-c062ee5a5cbc","resourceVersion":"317","creationTimestamp":"2023-10-09T23:19:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-717678","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-717678","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_19_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-09T23:19:08Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1009 23:19:28.992896 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-717678
	I1009 23:19:28.992921 1609109 round_trippers.go:469] Request Headers:
	I1009 23:19:28.992932 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:19:28.992939 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:19:28.995665 1609109 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:19:28.995763 1609109 round_trippers.go:577] Response Headers:
	I1009 23:19:28.995780 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:19:28.995788 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:19:28.995795 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:19:28 GMT
	I1009 23:19:28.995811 1609109 round_trippers.go:580]     Audit-Id: 550981fe-5ffc-4ba2-8c18-5e7442f8cb3e
	I1009 23:19:28.995820 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:19:28.995826 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:19:28.995954 1609109 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-717678","uid":"874e5ae6-ff43-40a8-8089-c062ee5a5cbc","resourceVersion":"317","creationTimestamp":"2023-10-09T23:19:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-717678","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-717678","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_19_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-09T23:19:08Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1009 23:19:29.493494 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-717678
	I1009 23:19:29.493516 1609109 round_trippers.go:469] Request Headers:
	I1009 23:19:29.493526 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:19:29.493537 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:19:29.496087 1609109 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:19:29.496115 1609109 round_trippers.go:577] Response Headers:
	I1009 23:19:29.496124 1609109 round_trippers.go:580]     Audit-Id: 9e924836-8f15-4b85-8831-774f6f6b3e71
	I1009 23:19:29.496130 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:19:29.496137 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:19:29.496144 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:19:29.496150 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:19:29.496159 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:19:29 GMT
	I1009 23:19:29.496286 1609109 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-717678","uid":"874e5ae6-ff43-40a8-8089-c062ee5a5cbc","resourceVersion":"317","creationTimestamp":"2023-10-09T23:19:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-717678","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-717678","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_19_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:19:08Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1009 23:19:29.496718 1609109 node_ready.go:58] node "multinode-717678" has status "Ready":"False"
	I1009 23:19:29.992969 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-717678
	I1009 23:19:29.992991 1609109 round_trippers.go:469] Request Headers:
	I1009 23:19:29.993002 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:19:29.993010 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:19:29.995574 1609109 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:19:29.995611 1609109 round_trippers.go:577] Response Headers:
	I1009 23:19:29.995619 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:19:29 GMT
	I1009 23:19:29.995626 1609109 round_trippers.go:580]     Audit-Id: 0d6611c5-aed1-48ad-add0-ab62592930d2
	I1009 23:19:29.995649 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:19:29.995662 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:19:29.995670 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:19:29.995677 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:19:29.995902 1609109 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-717678","uid":"874e5ae6-ff43-40a8-8089-c062ee5a5cbc","resourceVersion":"317","creationTimestamp":"2023-10-09T23:19:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-717678","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-717678","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_19_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:19:08Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1009 23:19:30.493613 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-717678
	I1009 23:19:30.493638 1609109 round_trippers.go:469] Request Headers:
	I1009 23:19:30.493648 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:19:30.493655 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:19:30.496873 1609109 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 23:19:30.496898 1609109 round_trippers.go:577] Response Headers:
	I1009 23:19:30.496907 1609109 round_trippers.go:580]     Audit-Id: 8bfb3512-81c4-4ee6-afa7-5ada124ba84d
	I1009 23:19:30.496913 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:19:30.496920 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:19:30.496947 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:19:30.496962 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:19:30.496969 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:19:30 GMT
	I1009 23:19:30.497116 1609109 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-717678","uid":"874e5ae6-ff43-40a8-8089-c062ee5a5cbc","resourceVersion":"317","creationTimestamp":"2023-10-09T23:19:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-717678","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-717678","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_19_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:19:08Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1009 23:19:30.993292 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-717678
	I1009 23:19:30.993317 1609109 round_trippers.go:469] Request Headers:
	I1009 23:19:30.993327 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:19:30.993335 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:19:30.995967 1609109 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:19:30.996032 1609109 round_trippers.go:577] Response Headers:
	I1009 23:19:30.996054 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:19:30.996073 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:19:30.996103 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:19:30.996125 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:19:30 GMT
	I1009 23:19:30.996145 1609109 round_trippers.go:580]     Audit-Id: 8ddc269f-f450-447a-b79f-6c439338c8ef
	I1009 23:19:30.996165 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:19:30.996311 1609109 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-717678","uid":"874e5ae6-ff43-40a8-8089-c062ee5a5cbc","resourceVersion":"317","creationTimestamp":"2023-10-09T23:19:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-717678","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-717678","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_19_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:19:08Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1009 23:19:31.492844 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-717678
	I1009 23:19:31.492868 1609109 round_trippers.go:469] Request Headers:
	I1009 23:19:31.492878 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:19:31.492886 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:19:31.495724 1609109 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:19:31.495751 1609109 round_trippers.go:577] Response Headers:
	I1009 23:19:31.495760 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:19:31.495767 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:19:31.495773 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:19:31.495781 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:19:31 GMT
	I1009 23:19:31.495788 1609109 round_trippers.go:580]     Audit-Id: 81f0c4d0-bb71-4dc7-9659-237a21c8ccfa
	I1009 23:19:31.495794 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:19:31.495891 1609109 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-717678","uid":"874e5ae6-ff43-40a8-8089-c062ee5a5cbc","resourceVersion":"317","creationTimestamp":"2023-10-09T23:19:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-717678","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-717678","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_19_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:19:08Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1009 23:19:31.992976 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-717678
	I1009 23:19:31.993001 1609109 round_trippers.go:469] Request Headers:
	I1009 23:19:31.993011 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:19:31.993018 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:19:31.996877 1609109 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 23:19:31.996899 1609109 round_trippers.go:577] Response Headers:
	I1009 23:19:31.996907 1609109 round_trippers.go:580]     Audit-Id: c4fe2d98-a733-4de8-8e7a-819a0c58c1ef
	I1009 23:19:31.996914 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:19:31.996920 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:19:31.996926 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:19:31.996932 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:19:31.996938 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:19:31 GMT
	I1009 23:19:31.997199 1609109 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-717678","uid":"874e5ae6-ff43-40a8-8089-c062ee5a5cbc","resourceVersion":"317","creationTimestamp":"2023-10-09T23:19:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-717678","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-717678","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_19_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:19:08Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1009 23:19:31.997607 1609109 node_ready.go:58] node "multinode-717678" has status "Ready":"False"
	I1009 23:19:32.493755 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-717678
	I1009 23:19:32.493780 1609109 round_trippers.go:469] Request Headers:
	I1009 23:19:32.493791 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:19:32.493798 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:19:32.496532 1609109 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:19:32.496556 1609109 round_trippers.go:577] Response Headers:
	I1009 23:19:32.496565 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:19:32 GMT
	I1009 23:19:32.496572 1609109 round_trippers.go:580]     Audit-Id: dcff835e-393b-4639-9d5d-4daf64c8fd2e
	I1009 23:19:32.496578 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:19:32.496587 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:19:32.496593 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:19:32.496600 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:19:32.496717 1609109 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-717678","uid":"874e5ae6-ff43-40a8-8089-c062ee5a5cbc","resourceVersion":"317","creationTimestamp":"2023-10-09T23:19:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-717678","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-717678","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_19_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:19:08Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1009 23:19:32.992874 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-717678
	I1009 23:19:32.992896 1609109 round_trippers.go:469] Request Headers:
	I1009 23:19:32.992906 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:19:32.992914 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:19:32.995615 1609109 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:19:32.995636 1609109 round_trippers.go:577] Response Headers:
	I1009 23:19:32.995645 1609109 round_trippers.go:580]     Audit-Id: 2d7ba6af-26b9-4d14-ab8e-35fa4a43d791
	I1009 23:19:32.995652 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:19:32.995658 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:19:32.995664 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:19:32.995671 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:19:32.995678 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:19:32 GMT
	I1009 23:19:32.995826 1609109 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-717678","uid":"874e5ae6-ff43-40a8-8089-c062ee5a5cbc","resourceVersion":"317","creationTimestamp":"2023-10-09T23:19:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-717678","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-717678","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_19_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:19:08Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1009 23:19:33.492885 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-717678
	I1009 23:19:33.492909 1609109 round_trippers.go:469] Request Headers:
	I1009 23:19:33.492919 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:19:33.492927 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:19:33.495463 1609109 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:19:33.495486 1609109 round_trippers.go:577] Response Headers:
	I1009 23:19:33.495494 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:19:33 GMT
	I1009 23:19:33.495501 1609109 round_trippers.go:580]     Audit-Id: b23a6ece-4b3c-42ac-8639-363630f2ed09
	I1009 23:19:33.495507 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:19:33.495515 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:19:33.495521 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:19:33.495528 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:19:33.495667 1609109 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-717678","uid":"874e5ae6-ff43-40a8-8089-c062ee5a5cbc","resourceVersion":"317","creationTimestamp":"2023-10-09T23:19:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-717678","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-717678","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_19_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:19:08Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1009 23:19:33.993045 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-717678
	I1009 23:19:33.993073 1609109 round_trippers.go:469] Request Headers:
	I1009 23:19:33.993082 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:19:33.993091 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:19:33.995687 1609109 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:19:33.995720 1609109 round_trippers.go:577] Response Headers:
	I1009 23:19:33.995735 1609109 round_trippers.go:580]     Audit-Id: 093670a1-7d84-4f01-a8b6-64394d2a133f
	I1009 23:19:33.995742 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:19:33.995750 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:19:33.995762 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:19:33.995769 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:19:33.995778 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:19:33 GMT
	I1009 23:19:33.995968 1609109 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-717678","uid":"874e5ae6-ff43-40a8-8089-c062ee5a5cbc","resourceVersion":"317","creationTimestamp":"2023-10-09T23:19:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-717678","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-717678","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_19_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:19:08Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1009 23:19:34.492873 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-717678
	I1009 23:19:34.492898 1609109 round_trippers.go:469] Request Headers:
	I1009 23:19:34.492907 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:19:34.492921 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:19:34.495538 1609109 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:19:34.495561 1609109 round_trippers.go:577] Response Headers:
	I1009 23:19:34.495569 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:19:34.495577 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:19:34.495584 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:19:34.495590 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:19:34 GMT
	I1009 23:19:34.495597 1609109 round_trippers.go:580]     Audit-Id: fa5252b5-d3bb-4717-81d1-89465323bf6c
	I1009 23:19:34.495603 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:19:34.495753 1609109 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-717678","uid":"874e5ae6-ff43-40a8-8089-c062ee5a5cbc","resourceVersion":"317","creationTimestamp":"2023-10-09T23:19:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-717678","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-717678","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_19_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:19:08Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1009 23:19:34.496172 1609109 node_ready.go:58] node "multinode-717678" has status "Ready":"False"
	I1009 23:19:34.992822 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-717678
	I1009 23:19:34.992848 1609109 round_trippers.go:469] Request Headers:
	I1009 23:19:34.992858 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:19:34.992865 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:19:34.995584 1609109 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:19:34.995614 1609109 round_trippers.go:577] Response Headers:
	I1009 23:19:34.995624 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:19:34.995631 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:19:34.995638 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:19:34.995644 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:19:34.995651 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:19:34 GMT
	I1009 23:19:34.995657 1609109 round_trippers.go:580]     Audit-Id: 4d980384-658c-4280-8b13-94e35927b933
	I1009 23:19:34.995794 1609109 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-717678","uid":"874e5ae6-ff43-40a8-8089-c062ee5a5cbc","resourceVersion":"317","creationTimestamp":"2023-10-09T23:19:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-717678","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-717678","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_19_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:19:08Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1009 23:19:35.492913 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-717678
	I1009 23:19:35.492960 1609109 round_trippers.go:469] Request Headers:
	I1009 23:19:35.492970 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:19:35.492977 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:19:35.495595 1609109 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:19:35.495620 1609109 round_trippers.go:577] Response Headers:
	I1009 23:19:35.495629 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:19:35.495636 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:19:35.495647 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:19:35 GMT
	I1009 23:19:35.495663 1609109 round_trippers.go:580]     Audit-Id: 9df36518-bd96-498a-a7fb-05ee4ccd5bdc
	I1009 23:19:35.495670 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:19:35.495676 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:19:35.495994 1609109 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-717678","uid":"874e5ae6-ff43-40a8-8089-c062ee5a5cbc","resourceVersion":"317","creationTimestamp":"2023-10-09T23:19:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-717678","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-717678","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_19_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:19:08Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1009 23:19:35.993842 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-717678
	I1009 23:19:35.993867 1609109 round_trippers.go:469] Request Headers:
	I1009 23:19:35.993877 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:19:35.993884 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:19:35.996427 1609109 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:19:35.996450 1609109 round_trippers.go:577] Response Headers:
	I1009 23:19:35.996459 1609109 round_trippers.go:580]     Audit-Id: deeb58cc-8e75-468a-85a2-d405925c1045
	I1009 23:19:35.996465 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:19:35.996471 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:19:35.996477 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:19:35.996484 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:19:35.996490 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:19:35 GMT
	I1009 23:19:35.996682 1609109 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-717678","uid":"874e5ae6-ff43-40a8-8089-c062ee5a5cbc","resourceVersion":"317","creationTimestamp":"2023-10-09T23:19:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-717678","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-717678","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_19_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:19:08Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1009 23:19:36.493411 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-717678
	I1009 23:19:36.493435 1609109 round_trippers.go:469] Request Headers:
	I1009 23:19:36.493446 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:19:36.493453 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:19:36.496110 1609109 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:19:36.496135 1609109 round_trippers.go:577] Response Headers:
	I1009 23:19:36.496144 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:19:36.496151 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:19:36.496158 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:19:36.496164 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:19:36 GMT
	I1009 23:19:36.496171 1609109 round_trippers.go:580]     Audit-Id: 72eb2a7c-1492-44db-8fe3-29462795a46b
	I1009 23:19:36.496181 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:19:36.496574 1609109 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-717678","uid":"874e5ae6-ff43-40a8-8089-c062ee5a5cbc","resourceVersion":"317","creationTimestamp":"2023-10-09T23:19:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-717678","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-717678","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_19_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:19:08Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1009 23:19:36.496976 1609109 node_ready.go:58] node "multinode-717678" has status "Ready":"False"
	I1009 23:19:36.993777 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-717678
	I1009 23:19:36.993815 1609109 round_trippers.go:469] Request Headers:
	I1009 23:19:36.993826 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:19:36.993834 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:19:36.996587 1609109 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:19:36.996611 1609109 round_trippers.go:577] Response Headers:
	I1009 23:19:36.996619 1609109 round_trippers.go:580]     Audit-Id: 56279ca0-5d81-4dc4-83f5-1e177441704b
	I1009 23:19:36.996626 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:19:36.996632 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:19:36.996639 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:19:36.996645 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:19:36.996656 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:19:36 GMT
	I1009 23:19:36.996767 1609109 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-717678","uid":"874e5ae6-ff43-40a8-8089-c062ee5a5cbc","resourceVersion":"317","creationTimestamp":"2023-10-09T23:19:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-717678","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-717678","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_19_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:19:08Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1009 23:19:37.493254 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-717678
	I1009 23:19:37.493281 1609109 round_trippers.go:469] Request Headers:
	I1009 23:19:37.493291 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:19:37.493301 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:19:37.495790 1609109 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:19:37.495812 1609109 round_trippers.go:577] Response Headers:
	I1009 23:19:37.495821 1609109 round_trippers.go:580]     Audit-Id: bfa00753-8eb2-4cf9-8058-2610866afc03
	I1009 23:19:37.495828 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:19:37.495834 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:19:37.495840 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:19:37.495847 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:19:37.495855 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:19:37 GMT
	I1009 23:19:37.495973 1609109 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-717678","uid":"874e5ae6-ff43-40a8-8089-c062ee5a5cbc","resourceVersion":"317","creationTimestamp":"2023-10-09T23:19:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-717678","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-717678","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_19_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:19:08Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1009 23:19:37.993700 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-717678
	I1009 23:19:37.993727 1609109 round_trippers.go:469] Request Headers:
	I1009 23:19:37.993737 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:19:37.993745 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:19:37.996547 1609109 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:19:37.996567 1609109 round_trippers.go:577] Response Headers:
	I1009 23:19:37.996576 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:19:37.996583 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:19:37.996589 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:19:37.996604 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:19:37.996612 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:19:37 GMT
	I1009 23:19:37.996618 1609109 round_trippers.go:580]     Audit-Id: f39489f4-7a10-4c2b-a568-10eda9bd656c
	I1009 23:19:37.996724 1609109 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-717678","uid":"874e5ae6-ff43-40a8-8089-c062ee5a5cbc","resourceVersion":"317","creationTimestamp":"2023-10-09T23:19:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-717678","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-717678","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_19_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:19:08Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1009 23:19:38.493038 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-717678
	I1009 23:19:38.493060 1609109 round_trippers.go:469] Request Headers:
	I1009 23:19:38.493070 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:19:38.493077 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:19:38.495557 1609109 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:19:38.495638 1609109 round_trippers.go:577] Response Headers:
	I1009 23:19:38.495660 1609109 round_trippers.go:580]     Audit-Id: 8a10df25-1672-4c2e-b92b-78748ed2963b
	I1009 23:19:38.495686 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:19:38.495694 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:19:38.495701 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:19:38.495707 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:19:38.495714 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:19:38 GMT
	I1009 23:19:38.495862 1609109 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-717678","uid":"874e5ae6-ff43-40a8-8089-c062ee5a5cbc","resourceVersion":"317","creationTimestamp":"2023-10-09T23:19:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-717678","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-717678","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_19_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:19:08Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1009 23:19:38.993323 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-717678
	I1009 23:19:38.993348 1609109 round_trippers.go:469] Request Headers:
	I1009 23:19:38.993358 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:19:38.993366 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:19:38.995959 1609109 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:19:38.996027 1609109 round_trippers.go:577] Response Headers:
	I1009 23:19:38.996049 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:19:38 GMT
	I1009 23:19:38.996072 1609109 round_trippers.go:580]     Audit-Id: 381bbad1-e501-4afe-8c61-75d47badce62
	I1009 23:19:38.996105 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:19:38.996129 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:19:38.996149 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:19:38.996168 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:19:38.996307 1609109 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-717678","uid":"874e5ae6-ff43-40a8-8089-c062ee5a5cbc","resourceVersion":"317","creationTimestamp":"2023-10-09T23:19:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-717678","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-717678","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_19_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:19:08Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1009 23:19:38.996732 1609109 node_ready.go:58] node "multinode-717678" has status "Ready":"False"
	I1009 23:19:39.493840 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-717678
	I1009 23:19:39.493866 1609109 round_trippers.go:469] Request Headers:
	I1009 23:19:39.493876 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:19:39.493884 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:19:39.496409 1609109 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:19:39.496436 1609109 round_trippers.go:577] Response Headers:
	I1009 23:19:39.496445 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:19:39 GMT
	I1009 23:19:39.496452 1609109 round_trippers.go:580]     Audit-Id: 0842fae8-1b8d-410d-a2b4-e56195262add
	I1009 23:19:39.496458 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:19:39.496464 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:19:39.496470 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:19:39.496477 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:19:39.496583 1609109 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-717678","uid":"874e5ae6-ff43-40a8-8089-c062ee5a5cbc","resourceVersion":"317","creationTimestamp":"2023-10-09T23:19:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-717678","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-717678","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_19_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:19:08Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1009 23:19:39.993748 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-717678
	I1009 23:19:39.993772 1609109 round_trippers.go:469] Request Headers:
	I1009 23:19:39.993787 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:19:39.993795 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:19:39.996487 1609109 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:19:39.996508 1609109 round_trippers.go:577] Response Headers:
	I1009 23:19:39.996517 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:19:39.996524 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:19:39 GMT
	I1009 23:19:39.996554 1609109 round_trippers.go:580]     Audit-Id: cbffa018-321d-4ec7-955b-bcf366b2f50e
	I1009 23:19:39.996568 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:19:39.996574 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:19:39.996580 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:19:39.996706 1609109 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-717678","uid":"874e5ae6-ff43-40a8-8089-c062ee5a5cbc","resourceVersion":"317","creationTimestamp":"2023-10-09T23:19:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-717678","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-717678","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_19_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:19:08Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1009 23:19:40.492922 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-717678
	I1009 23:19:40.492944 1609109 round_trippers.go:469] Request Headers:
	I1009 23:19:40.492953 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:19:40.492961 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:19:40.495703 1609109 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:19:40.495734 1609109 round_trippers.go:577] Response Headers:
	I1009 23:19:40.495743 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:19:40.495751 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:19:40.495758 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:19:40 GMT
	I1009 23:19:40.495764 1609109 round_trippers.go:580]     Audit-Id: 8e9ef224-6fc5-4da4-aa30-4133b75a0ebf
	I1009 23:19:40.495771 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:19:40.495778 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:19:40.495905 1609109 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-717678","uid":"874e5ae6-ff43-40a8-8089-c062ee5a5cbc","resourceVersion":"317","creationTimestamp":"2023-10-09T23:19:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-717678","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-717678","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_19_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:19:08Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1009 23:19:40.993530 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-717678
	I1009 23:19:40.993575 1609109 round_trippers.go:469] Request Headers:
	I1009 23:19:40.993585 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:19:40.993593 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:19:40.996284 1609109 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:19:40.996313 1609109 round_trippers.go:577] Response Headers:
	I1009 23:19:40.996322 1609109 round_trippers.go:580]     Audit-Id: 437ed3be-4946-4ebb-b485-b087508b35b5
	I1009 23:19:40.996330 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:19:40.996337 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:19:40.996343 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:19:40.996352 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:19:40.996359 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:19:40 GMT
	I1009 23:19:40.996708 1609109 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-717678","uid":"874e5ae6-ff43-40a8-8089-c062ee5a5cbc","resourceVersion":"317","creationTimestamp":"2023-10-09T23:19:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-717678","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-717678","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_19_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:19:08Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1009 23:19:40.997123 1609109 node_ready.go:58] node "multinode-717678" has status "Ready":"False"
	I1009 23:19:41.493207 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-717678
	I1009 23:19:41.493231 1609109 round_trippers.go:469] Request Headers:
	I1009 23:19:41.493240 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:19:41.493247 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:19:41.495932 1609109 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:19:41.495957 1609109 round_trippers.go:577] Response Headers:
	I1009 23:19:41.495966 1609109 round_trippers.go:580]     Audit-Id: 3b9ea815-31a2-42f6-97d7-692da91ff2d4
	I1009 23:19:41.495974 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:19:41.495980 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:19:41.495986 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:19:41.495993 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:19:41.496005 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:19:41 GMT
	I1009 23:19:41.496195 1609109 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-717678","uid":"874e5ae6-ff43-40a8-8089-c062ee5a5cbc","resourceVersion":"317","creationTimestamp":"2023-10-09T23:19:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-717678","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-717678","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_19_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:19:08Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1009 23:19:41.993308 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-717678
	I1009 23:19:41.993331 1609109 round_trippers.go:469] Request Headers:
	I1009 23:19:41.993342 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:19:41.993349 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:19:41.996091 1609109 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:19:41.996121 1609109 round_trippers.go:577] Response Headers:
	I1009 23:19:41.996130 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:19:41.996137 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:19:41.996144 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:19:41 GMT
	I1009 23:19:41.996150 1609109 round_trippers.go:580]     Audit-Id: 9011c72b-1bdb-40e4-84f7-131b342ae46b
	I1009 23:19:41.996157 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:19:41.996167 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:19:41.996380 1609109 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-717678","uid":"874e5ae6-ff43-40a8-8089-c062ee5a5cbc","resourceVersion":"317","creationTimestamp":"2023-10-09T23:19:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-717678","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-717678","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_19_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-09T23:19:08Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1009 23:19:42.493193 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-717678
	I1009 23:19:42.493218 1609109 round_trippers.go:469] Request Headers:
	I1009 23:19:42.493229 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:19:42.493239 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:19:42.495902 1609109 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:19:42.495925 1609109 round_trippers.go:577] Response Headers:
	I1009 23:19:42.495933 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:19:42.495940 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:19:42.495946 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:19:42.495953 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:19:42.495959 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:19:42 GMT
	I1009 23:19:42.495966 1609109 round_trippers.go:580]     Audit-Id: 078c651f-cf9f-4da9-98a1-c0a0b2216fcf
	I1009 23:19:42.496157 1609109 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-717678","uid":"874e5ae6-ff43-40a8-8089-c062ee5a5cbc","resourceVersion":"317","creationTimestamp":"2023-10-09T23:19:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-717678","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-717678","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_19_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-09T23:19:08Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1009 23:19:42.993106 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-717678
	I1009 23:19:42.993129 1609109 round_trippers.go:469] Request Headers:
	I1009 23:19:42.993139 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:19:42.993146 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:19:42.996054 1609109 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:19:42.996080 1609109 round_trippers.go:577] Response Headers:
	I1009 23:19:42.996089 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:19:42.996095 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:19:42 GMT
	I1009 23:19:42.996101 1609109 round_trippers.go:580]     Audit-Id: 6d67b4fd-8bac-446e-ae14-6e37184df9b7
	I1009 23:19:42.996108 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:19:42.996114 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:19:42.996120 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:19:42.996341 1609109 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-717678","uid":"874e5ae6-ff43-40a8-8089-c062ee5a5cbc","resourceVersion":"317","creationTimestamp":"2023-10-09T23:19:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-717678","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-717678","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_19_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-09T23:19:08Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1009 23:19:43.492927 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-717678
	I1009 23:19:43.492953 1609109 round_trippers.go:469] Request Headers:
	I1009 23:19:43.492962 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:19:43.492974 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:19:43.495582 1609109 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:19:43.495605 1609109 round_trippers.go:577] Response Headers:
	I1009 23:19:43.495613 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:19:43.495620 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:19:43.495626 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:19:43.495633 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:19:43.495639 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:19:43 GMT
	I1009 23:19:43.495645 1609109 round_trippers.go:580]     Audit-Id: 5f38ede4-6007-46ed-ad0f-c617557e18e3
	I1009 23:19:43.495794 1609109 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-717678","uid":"874e5ae6-ff43-40a8-8089-c062ee5a5cbc","resourceVersion":"317","creationTimestamp":"2023-10-09T23:19:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-717678","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-717678","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_19_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-09T23:19:08Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1009 23:19:43.496198 1609109 node_ready.go:58] node "multinode-717678" has status "Ready":"False"
	[... 4 further polls of the same GET at 23:19:43.993, 44.492, 44.993 and 45.493, each 200 OK in 2-3 ms with the identical truncated Node body; repeated records omitted. ...]
	I1009 23:19:45.496898 1609109 node_ready.go:58] node "multinode-717678" has status "Ready":"False"
	[... 5 further polls of the same GET at 23:19:45.993, 46.493, 46.993, 47.493 and 47.993, each 200 OK in 2-3 ms with the identical truncated Node body; repeated records omitted. ...]
	I1009 23:19:47.996880 1609109 node_ready.go:58] node "multinode-717678" has status "Ready":"False"
	[... 5 further polls of the same GET at 23:19:48.492, 48.993, 49.493, 49.993 and 50.493, each 200 OK in 2 ms with the identical truncated Node body; repeated records omitted. ...]
	I1009 23:19:50.496968 1609109 node_ready.go:58] node "multinode-717678" has status "Ready":"False"
	[... 5 further polls of the same GET at 23:19:50.993, 51.492, 51.993, 52.493 and 52.993, each 200 OK in 2-3 ms with the identical truncated Node body; repeated records omitted. ...]
	I1009 23:19:52.996800 1609109 node_ready.go:58] node "multinode-717678" has status "Ready":"False"
	[... 4 further polls of the same GET at 23:19:53.493, 53.993, 54.493 and 54.993, each 200 OK in 2 ms with the identical truncated Node body; the 54.993 record is cut off mid-response in the captured log. Repeated records omitted. ...]
	I1009 23:19:54.996072 1609109 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-717678","uid":"874e5ae6-ff43-40a8-8089-c062ee5a5cbc","resourceVersion":"317","creationTimestamp":"2023-10-09T23:19:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-717678","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-717678","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_19_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-09T23:19:08Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1009 23:19:55.492902 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-717678
	I1009 23:19:55.492926 1609109 round_trippers.go:469] Request Headers:
	I1009 23:19:55.492936 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:19:55.492943 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:19:55.495512 1609109 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:19:55.495536 1609109 round_trippers.go:577] Response Headers:
	I1009 23:19:55.495560 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:19:55.495568 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:19:55.495575 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:19:55.495581 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:19:55.495588 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:19:55 GMT
	I1009 23:19:55.495594 1609109 round_trippers.go:580]     Audit-Id: 0ca7aac3-5e98-430c-9021-8bfb20a3fbad
	I1009 23:19:55.495719 1609109 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-717678","uid":"874e5ae6-ff43-40a8-8089-c062ee5a5cbc","resourceVersion":"317","creationTimestamp":"2023-10-09T23:19:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-717678","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-717678","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_19_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-09T23:19:08Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1009 23:19:55.496200 1609109 node_ready.go:58] node "multinode-717678" has status "Ready":"False"
	I1009 23:19:55.993014 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-717678
	I1009 23:19:55.993040 1609109 round_trippers.go:469] Request Headers:
	I1009 23:19:55.993050 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:19:55.993058 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:19:55.995613 1609109 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:19:55.995644 1609109 round_trippers.go:577] Response Headers:
	I1009 23:19:55.995653 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:19:55.995660 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:19:55 GMT
	I1009 23:19:55.995667 1609109 round_trippers.go:580]     Audit-Id: 823365ba-290c-4538-a0d0-56894f4d19ce
	I1009 23:19:55.995673 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:19:55.995679 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:19:55.995685 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:19:55.995803 1609109 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-717678","uid":"874e5ae6-ff43-40a8-8089-c062ee5a5cbc","resourceVersion":"317","creationTimestamp":"2023-10-09T23:19:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-717678","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-717678","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_19_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-09T23:19:08Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1009 23:19:56.492918 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-717678
	I1009 23:19:56.492943 1609109 round_trippers.go:469] Request Headers:
	I1009 23:19:56.492953 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:19:56.492960 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:19:56.495598 1609109 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:19:56.495622 1609109 round_trippers.go:577] Response Headers:
	I1009 23:19:56.495630 1609109 round_trippers.go:580]     Audit-Id: e21b7bac-9af1-420e-adcf-93e0fdfcf4e5
	I1009 23:19:56.495638 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:19:56.495645 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:19:56.495651 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:19:56.495667 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:19:56.495679 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:19:56 GMT
	I1009 23:19:56.495841 1609109 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-717678","uid":"874e5ae6-ff43-40a8-8089-c062ee5a5cbc","resourceVersion":"317","creationTimestamp":"2023-10-09T23:19:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-717678","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-717678","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_19_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-09T23:19:08Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1009 23:19:56.992991 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-717678
	I1009 23:19:56.993016 1609109 round_trippers.go:469] Request Headers:
	I1009 23:19:56.993026 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:19:56.993034 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:19:56.995778 1609109 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:19:56.995798 1609109 round_trippers.go:577] Response Headers:
	I1009 23:19:56.995807 1609109 round_trippers.go:580]     Audit-Id: 5407b902-956c-455a-922f-0d77988c38c7
	I1009 23:19:56.995814 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:19:56.995820 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:19:56.995828 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:19:56.995834 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:19:56.995841 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:19:56 GMT
	I1009 23:19:56.995987 1609109 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-717678","uid":"874e5ae6-ff43-40a8-8089-c062ee5a5cbc","resourceVersion":"317","creationTimestamp":"2023-10-09T23:19:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-717678","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-717678","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_19_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-09T23:19:08Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1009 23:19:57.492922 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-717678
	I1009 23:19:57.492947 1609109 round_trippers.go:469] Request Headers:
	I1009 23:19:57.492959 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:19:57.492969 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:19:57.495800 1609109 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:19:57.495830 1609109 round_trippers.go:577] Response Headers:
	I1009 23:19:57.495845 1609109 round_trippers.go:580]     Audit-Id: 7ed3351e-4623-4caf-99db-f724f1d66cc4
	I1009 23:19:57.495852 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:19:57.495858 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:19:57.495866 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:19:57.495877 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:19:57.495888 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:19:57 GMT
	I1009 23:19:57.496033 1609109 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-717678","uid":"874e5ae6-ff43-40a8-8089-c062ee5a5cbc","resourceVersion":"398","creationTimestamp":"2023-10-09T23:19:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-717678","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-717678","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_19_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-09T23:19:08Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1009 23:19:57.496467 1609109 node_ready.go:49] node "multinode-717678" has status "Ready":"True"
	I1009 23:19:57.496487 1609109 node_ready.go:38] duration metric: took 32.553485572s waiting for node "multinode-717678" to be "Ready" ...
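The block above is minikube's node_ready loop: one GET of /api/v1/nodes/multinode-717678 roughly every 500ms (see the ~.493/.993 timestamps) until the node's Ready condition flips to True, which happens here once the node object advances to resourceVersion 398, 32.5s in. A minimal sketch of that pattern against client-go, assuming an already-built clientset; the function name and the 6-minute ceiling are illustrative, not minikube's actual source:

// Sketch (not minikube's code): poll a node until Ready, matching the
// ~500ms GET cadence visible in the log above. Assumes a working clientset.
package readiness

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat errors as "not yet"; keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					// mirrors: node "multinode-717678" has status "Ready":"False"/"True"
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}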
	I1009 23:19:57.496497 1609109 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 23:19:57.496563 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1009 23:19:57.496577 1609109 round_trippers.go:469] Request Headers:
	I1009 23:19:57.496585 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:19:57.496591 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:19:57.500418 1609109 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 23:19:57.500445 1609109 round_trippers.go:577] Response Headers:
	I1009 23:19:57.500453 1609109 round_trippers.go:580]     Audit-Id: 919fba5a-08ca-46e2-918c-7ee872052549
	I1009 23:19:57.500460 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:19:57.500466 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:19:57.500472 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:19:57.500479 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:19:57.500490 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:19:57 GMT
	I1009 23:19:57.501104 1609109 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"404"},"items":[{"metadata":{"name":"coredns-5dd5756b68-zz9n9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"319f2e3b-8eb5-4d49-bfa6-f7add29b87fd","resourceVersion":"404","creationTimestamp":"2023-10-09T23:19:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"2a4f696e-345b-4d7d-9e3b-7f5b62b3a01c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:19:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2a4f696e-345b-4d7d-9e3b-7f5b62b3a01c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55534 chars]
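Before any per-pod wait starts, the client pulls the whole kube-system PodList once (the ~55KB response above) and works out which pods match the system-critical labels printed at 23:19:57.496497. A hedged sketch of that selection step; minikube appears to list everything and filter client-side, and the filter below is inferred from the label list in the log, not copied from minikube's source:

// Sketch: one PodList call, then a client-side filter on the labels named in
// the pod_ready log line (k8s-app=kube-dns, component=etcd, component=kube-apiserver,
// component=kube-controller-manager, k8s-app=kube-proxy, component=kube-scheduler).
// The filter logic is illustrative. (imports as in the node sketch above)
func systemCriticalPods(ctx context.Context, cs kubernetes.Interface) ([]corev1.Pod, error) {
	list, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	if err != nil {
		return nil, err
	}
	var picked []corev1.Pod
	for _, p := range list.Items {
		ka, comp := p.Labels["k8s-app"], p.Labels["component"]
		if ka == "kube-dns" || ka == "kube-proxy" ||
			comp == "etcd" || comp == "kube-apiserver" ||
			comp == "kube-controller-manager" || comp == "kube-scheduler" {
			picked = append(picked, p)
		}
	}
	return picked, nil
}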
	I1009 23:19:57.505235 1609109 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-zz9n9" in "kube-system" namespace to be "Ready" ...
	I1009 23:19:57.505345 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-zz9n9
	I1009 23:19:57.505362 1609109 round_trippers.go:469] Request Headers:
	I1009 23:19:57.505371 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:19:57.505383 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:19:57.508337 1609109 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:19:57.508364 1609109 round_trippers.go:577] Response Headers:
	I1009 23:19:57.508378 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:19:57.508385 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:19:57.508391 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:19:57.508397 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:19:57 GMT
	I1009 23:19:57.508404 1609109 round_trippers.go:580]     Audit-Id: 7cdfc7c5-ce93-44f8-93e3-b146c8a734ab
	I1009 23:19:57.508410 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:19:57.508585 1609109 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-zz9n9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"319f2e3b-8eb5-4d49-bfa6-f7add29b87fd","resourceVersion":"404","creationTimestamp":"2023-10-09T23:19:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"2a4f696e-345b-4d7d-9e3b-7f5b62b3a01c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:19:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2a4f696e-345b-4d7d-9e3b-7f5b62b3a01c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1009 23:19:57.509249 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-717678
	I1009 23:19:57.509268 1609109 round_trippers.go:469] Request Headers:
	I1009 23:19:57.509277 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:19:57.509284 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:19:57.511895 1609109 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:19:57.511947 1609109 round_trippers.go:577] Response Headers:
	I1009 23:19:57.512003 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:19:57.512017 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:19:57.512025 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:19:57 GMT
	I1009 23:19:57.512035 1609109 round_trippers.go:580]     Audit-Id: 6963e61c-2f7c-492c-b6f0-45fa107bce19
	I1009 23:19:57.512042 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:19:57.512070 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:19:57.512438 1609109 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-717678","uid":"874e5ae6-ff43-40a8-8089-c062ee5a5cbc","resourceVersion":"398","creationTimestamp":"2023-10-09T23:19:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-717678","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-717678","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_19_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-09T23:19:08Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1009 23:19:57.512988 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-zz9n9
	I1009 23:19:57.513012 1609109 round_trippers.go:469] Request Headers:
	I1009 23:19:57.513021 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:19:57.513030 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:19:57.515789 1609109 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:19:57.515810 1609109 round_trippers.go:577] Response Headers:
	I1009 23:19:57.515819 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:19:57.515826 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:19:57.515833 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:19:57.515839 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:19:57 GMT
	I1009 23:19:57.515846 1609109 round_trippers.go:580]     Audit-Id: 0f122b99-c192-4d24-8fcb-62d7a4779bbf
	I1009 23:19:57.515853 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:19:57.515995 1609109 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-zz9n9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"319f2e3b-8eb5-4d49-bfa6-f7add29b87fd","resourceVersion":"404","creationTimestamp":"2023-10-09T23:19:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"2a4f696e-345b-4d7d-9e3b-7f5b62b3a01c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:19:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2a4f696e-345b-4d7d-9e3b-7f5b62b3a01c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1009 23:19:57.516557 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-717678
	I1009 23:19:57.516573 1609109 round_trippers.go:469] Request Headers:
	I1009 23:19:57.516581 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:19:57.516588 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:19:57.519290 1609109 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:19:57.519346 1609109 round_trippers.go:577] Response Headers:
	I1009 23:19:57.519355 1609109 round_trippers.go:580]     Audit-Id: 4dae3406-2c92-4f5b-a80a-ae1d03a13986
	I1009 23:19:57.519362 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:19:57.519368 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:19:57.519374 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:19:57.519380 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:19:57.519387 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:19:57 GMT
	I1009 23:19:57.519540 1609109 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-717678","uid":"874e5ae6-ff43-40a8-8089-c062ee5a5cbc","resourceVersion":"398","creationTimestamp":"2023-10-09T23:19:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-717678","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-717678","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_19_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-09T23:19:08Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1009 23:19:58.020255 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-zz9n9
	I1009 23:19:58.020279 1609109 round_trippers.go:469] Request Headers:
	I1009 23:19:58.020289 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:19:58.020296 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:19:58.023174 1609109 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:19:58.023202 1609109 round_trippers.go:577] Response Headers:
	I1009 23:19:58.023211 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:19:58.023219 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:19:58.023225 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:19:58 GMT
	I1009 23:19:58.023238 1609109 round_trippers.go:580]     Audit-Id: f547f9c9-fb83-4f74-99a0-404a0f68fd5f
	I1009 23:19:58.023245 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:19:58.023251 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:19:58.023659 1609109 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-zz9n9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"319f2e3b-8eb5-4d49-bfa6-f7add29b87fd","resourceVersion":"404","creationTimestamp":"2023-10-09T23:19:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"2a4f696e-345b-4d7d-9e3b-7f5b62b3a01c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:19:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2a4f696e-345b-4d7d-9e3b-7f5b62b3a01c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1009 23:19:58.024284 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-717678
	I1009 23:19:58.024306 1609109 round_trippers.go:469] Request Headers:
	I1009 23:19:58.024316 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:19:58.024323 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:19:58.026965 1609109 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:19:58.026993 1609109 round_trippers.go:577] Response Headers:
	I1009 23:19:58.027002 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:19:58.027009 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:19:58 GMT
	I1009 23:19:58.027016 1609109 round_trippers.go:580]     Audit-Id: e99d2065-dfe8-494d-bff0-84e670269267
	I1009 23:19:58.027022 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:19:58.027029 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:19:58.027036 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:19:58.027222 1609109 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-717678","uid":"874e5ae6-ff43-40a8-8089-c062ee5a5cbc","resourceVersion":"398","creationTimestamp":"2023-10-09T23:19:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-717678","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-717678","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_19_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-09T23:19:08Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1009 23:19:58.520344 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-zz9n9
	I1009 23:19:58.520364 1609109 round_trippers.go:469] Request Headers:
	I1009 23:19:58.520374 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:19:58.520381 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:19:58.523867 1609109 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 23:19:58.523891 1609109 round_trippers.go:577] Response Headers:
	I1009 23:19:58.523900 1609109 round_trippers.go:580]     Audit-Id: 96a042c1-0740-483b-92a0-44543a0f71a5
	I1009 23:19:58.523913 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:19:58.523920 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:19:58.523927 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:19:58.523938 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:19:58.523945 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:19:58 GMT
	I1009 23:19:58.524321 1609109 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-zz9n9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"319f2e3b-8eb5-4d49-bfa6-f7add29b87fd","resourceVersion":"414","creationTimestamp":"2023-10-09T23:19:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"2a4f696e-345b-4d7d-9e3b-7f5b62b3a01c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:19:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2a4f696e-345b-4d7d-9e3b-7f5b62b3a01c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I1009 23:19:58.524853 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-717678
	I1009 23:19:58.524863 1609109 round_trippers.go:469] Request Headers:
	I1009 23:19:58.524872 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:19:58.524879 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:19:58.533210 1609109 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1009 23:19:58.533231 1609109 round_trippers.go:577] Response Headers:
	I1009 23:19:58.533239 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:19:58.533246 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:19:58.533252 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:19:58.533258 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:19:58.533264 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:19:58 GMT
	I1009 23:19:58.533270 1609109 round_trippers.go:580]     Audit-Id: c43f68c4-8cbd-4e52-98d6-41bf70f6a194
	I1009 23:19:58.533810 1609109 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-717678","uid":"874e5ae6-ff43-40a8-8089-c062ee5a5cbc","resourceVersion":"398","creationTimestamp":"2023-10-09T23:19:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-717678","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-717678","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_19_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-09T23:19:08Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1009 23:19:58.534200 1609109 pod_ready.go:92] pod "coredns-5dd5756b68-zz9n9" in "kube-system" namespace has status "Ready":"True"
	I1009 23:19:58.534212 1609109 pod_ready.go:81] duration metric: took 1.0289365s waiting for pod "coredns-5dd5756b68-zz9n9" in "kube-system" namespace to be "Ready" ...
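Each per-pod wait that follows (etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler) repeats the same shape as the coredns wait just completed: GET the pod, test its Ready condition, GET the node again to confirm it is still Ready (which is why every pod GET in this log is paired with a node GET), then finish or retry. The predicate itself is the standard PodReady condition; a minimal sketch, with an illustrative helper name:

// Sketch: the readiness test behind `pod ... has status "Ready":"True"`.
// A pod counts as Ready when its PodReady condition reports ConditionTrue.
// (imports as in the node sketch above)
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}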
	I1009 23:19:58.534223 1609109 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-717678" in "kube-system" namespace to be "Ready" ...
	I1009 23:19:58.534293 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-717678
	I1009 23:19:58.534298 1609109 round_trippers.go:469] Request Headers:
	I1009 23:19:58.534308 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:19:58.534315 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:19:58.540356 1609109 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1009 23:19:58.540426 1609109 round_trippers.go:577] Response Headers:
	I1009 23:19:58.540447 1609109 round_trippers.go:580]     Audit-Id: a4de18d6-c407-4262-b126-1bd52ee1c2a6
	I1009 23:19:58.540466 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:19:58.540483 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:19:58.540516 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:19:58.540539 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:19:58.540560 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:19:58 GMT
	I1009 23:19:58.541862 1609109 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-717678","namespace":"kube-system","uid":"05c1fa65-d9c1-4a32-b59a-8fb73083f98f","resourceVersion":"386","creationTimestamp":"2023-10-09T23:19:11Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"77edffb2dcd9b3fb05b40164ea3d4c0e","kubernetes.io/config.mirror":"77edffb2dcd9b3fb05b40164ea3d4c0e","kubernetes.io/config.seen":"2023-10-09T23:19:11.222865162Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-717678","uid":"874e5ae6-ff43-40a8-8089-c062ee5a5cbc","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:19:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I1009 23:19:58.542425 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-717678
	I1009 23:19:58.542442 1609109 round_trippers.go:469] Request Headers:
	I1009 23:19:58.542452 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:19:58.542459 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:19:58.544934 1609109 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:19:58.544955 1609109 round_trippers.go:577] Response Headers:
	I1009 23:19:58.544963 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:19:58.544970 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:19:58 GMT
	I1009 23:19:58.544976 1609109 round_trippers.go:580]     Audit-Id: a03e029b-a56e-4de0-b6ab-bb3bf804ffd8
	I1009 23:19:58.544982 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:19:58.544992 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:19:58.544998 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:19:58.546228 1609109 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-717678","uid":"874e5ae6-ff43-40a8-8089-c062ee5a5cbc","resourceVersion":"398","creationTimestamp":"2023-10-09T23:19:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-717678","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-717678","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_19_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-09T23:19:08Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1009 23:19:58.546631 1609109 pod_ready.go:92] pod "etcd-multinode-717678" in "kube-system" namespace has status "Ready":"True"
	I1009 23:19:58.546649 1609109 pod_ready.go:81] duration metric: took 12.420101ms waiting for pod "etcd-multinode-717678" in "kube-system" namespace to be "Ready" ...
	I1009 23:19:58.546664 1609109 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-717678" in "kube-system" namespace to be "Ready" ...
	I1009 23:19:58.546734 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-717678
	I1009 23:19:58.546746 1609109 round_trippers.go:469] Request Headers:
	I1009 23:19:58.546754 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:19:58.546761 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:19:58.549286 1609109 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:19:58.549312 1609109 round_trippers.go:577] Response Headers:
	I1009 23:19:58.549322 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:19:58.549328 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:19:58 GMT
	I1009 23:19:58.549334 1609109 round_trippers.go:580]     Audit-Id: 13a64fd2-bae9-41e6-99f2-0cf277f7c182
	I1009 23:19:58.549340 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:19:58.549347 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:19:58.549353 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:19:58.549662 1609109 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-717678","namespace":"kube-system","uid":"ab6577f1-1934-4fc4-bc32-83c7646ea4ce","resourceVersion":"387","creationTimestamp":"2023-10-09T23:19:11Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"e6de9bb12ea3b896eb151fe0950fa9cf","kubernetes.io/config.mirror":"e6de9bb12ea3b896eb151fe0950fa9cf","kubernetes.io/config.seen":"2023-10-09T23:19:11.222871866Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-717678","uid":"874e5ae6-ff43-40a8-8089-c062ee5a5cbc","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:19:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I1009 23:19:58.550245 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-717678
	I1009 23:19:58.550262 1609109 round_trippers.go:469] Request Headers:
	I1009 23:19:58.550272 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:19:58.550280 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:19:58.553569 1609109 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 23:19:58.553637 1609109 round_trippers.go:577] Response Headers:
	I1009 23:19:58.553661 1609109 round_trippers.go:580]     Audit-Id: 1725d909-f144-4c9a-87c6-8651d9d87749
	I1009 23:19:58.553680 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:19:58.553709 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:19:58.553733 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:19:58.553752 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:19:58.553772 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:19:58 GMT
	I1009 23:19:58.554409 1609109 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-717678","uid":"874e5ae6-ff43-40a8-8089-c062ee5a5cbc","resourceVersion":"398","creationTimestamp":"2023-10-09T23:19:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-717678","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-717678","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_19_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-09T23:19:08Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1009 23:19:58.554889 1609109 pod_ready.go:92] pod "kube-apiserver-multinode-717678" in "kube-system" namespace has status "Ready":"True"
	I1009 23:19:58.554923 1609109 pod_ready.go:81] duration metric: took 8.246672ms waiting for pod "kube-apiserver-multinode-717678" in "kube-system" namespace to be "Ready" ...
	I1009 23:19:58.554962 1609109 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-717678" in "kube-system" namespace to be "Ready" ...
	I1009 23:19:58.555057 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-717678
	I1009 23:19:58.555080 1609109 round_trippers.go:469] Request Headers:
	I1009 23:19:58.555110 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:19:58.555158 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:19:58.573138 1609109 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I1009 23:19:58.573166 1609109 round_trippers.go:577] Response Headers:
	I1009 23:19:58.573175 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:19:58 GMT
	I1009 23:19:58.573182 1609109 round_trippers.go:580]     Audit-Id: 25e374a5-3123-4e78-9f5f-ec22741d6689
	I1009 23:19:58.573189 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:19:58.573195 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:19:58.573201 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:19:58.573207 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:19:58.573358 1609109 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-717678","namespace":"kube-system","uid":"0ab0571d-b106-409a-a094-39501a8718a1","resourceVersion":"388","creationTimestamp":"2023-10-09T23:19:11Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"66e7ef2312034a4e7eda456783a5901a","kubernetes.io/config.mirror":"66e7ef2312034a4e7eda456783a5901a","kubernetes.io/config.seen":"2023-10-09T23:19:11.222873548Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-717678","uid":"874e5ae6-ff43-40a8-8089-c062ee5a5cbc","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:19:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I1009 23:19:58.693222 1609109 request.go:629] Waited for 119.307512ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-717678
	I1009 23:19:58.693283 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-717678
	I1009 23:19:58.693289 1609109 round_trippers.go:469] Request Headers:
	I1009 23:19:58.693304 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:19:58.693318 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:19:58.695898 1609109 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:19:58.695925 1609109 round_trippers.go:577] Response Headers:
	I1009 23:19:58.695933 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:19:58 GMT
	I1009 23:19:58.695940 1609109 round_trippers.go:580]     Audit-Id: 42da37c1-3051-4bea-beb9-b6efbb82a3c6
	I1009 23:19:58.695947 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:19:58.695960 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:19:58.695970 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:19:58.695977 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:19:58.696113 1609109 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-717678","uid":"874e5ae6-ff43-40a8-8089-c062ee5a5cbc","resourceVersion":"398","creationTimestamp":"2023-10-09T23:19:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-717678","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-717678","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_19_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-09T23:19:08Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1009 23:19:58.696518 1609109 pod_ready.go:92] pod "kube-controller-manager-multinode-717678" in "kube-system" namespace has status "Ready":"True"
	I1009 23:19:58.696535 1609109 pod_ready.go:81] duration metric: took 141.548198ms waiting for pod "kube-controller-manager-multinode-717678" in "kube-system" namespace to be "Ready" ...
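The "Waited for 119.307512ms due to client-side throttling, not priority and fairness" entries (request.go:629, above and again at 23:19:58.893, 23:19:59.092, and 23:19:59.292) are client-go pacing its own requests: the default rest.Config rate limit is 5 QPS with a burst of 10, and the back-to-back pod and node GETs in this phase exhaust that budget. A sketch of where those knobs live, assuming a kubeconfig path; the raised values are illustrative:

// Sketch: client-go's client-side throttle is configured on rest.Config.
// With QPS/Burst left at zero, client-go applies its defaults (5 QPS, burst 10),
// which is what produces the request.go:629 waits seen in this log.
// (imports: k8s.io/client-go/kubernetes, k8s.io/client-go/tools/clientcmd)
func newFasterClient(kubeconfig string) (kubernetes.Interface, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 50    // default 5
	cfg.Burst = 100 // default 10
	return kubernetes.NewForConfig(cfg)
}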
	I1009 23:19:58.696556 1609109 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8zh7z" in "kube-system" namespace to be "Ready" ...
	I1009 23:19:58.893998 1609109 request.go:629] Waited for 197.349031ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8zh7z
	I1009 23:19:58.894059 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8zh7z
	I1009 23:19:58.894070 1609109 round_trippers.go:469] Request Headers:
	I1009 23:19:58.894083 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:19:58.894093 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:19:58.896864 1609109 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:19:58.896890 1609109 round_trippers.go:577] Response Headers:
	I1009 23:19:58.896899 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:19:58.896906 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:19:58.896912 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:19:58.896919 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:19:58 GMT
	I1009 23:19:58.896925 1609109 round_trippers.go:580]     Audit-Id: 80aea3d5-7711-4e6f-b069-4d2e58edbe1e
	I1009 23:19:58.896935 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:19:58.897196 1609109 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8zh7z","generateName":"kube-proxy-","namespace":"kube-system","uid":"26420832-f9b9-4c98-b7c0-8b3f9d15b4aa","resourceVersion":"379","creationTimestamp":"2023-10-09T23:19:25Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"cf961607-8da1-41e8-a9f8-f66778682cbb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:19:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cf961607-8da1-41e8-a9f8-f66778682cbb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I1009 23:19:59.092978 1609109 request.go:629] Waited for 195.27425ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-717678
	I1009 23:19:59.093056 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-717678
	I1009 23:19:59.093086 1609109 round_trippers.go:469] Request Headers:
	I1009 23:19:59.093098 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:19:59.093105 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:19:59.095976 1609109 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:19:59.096000 1609109 round_trippers.go:577] Response Headers:
	I1009 23:19:59.096010 1609109 round_trippers.go:580]     Audit-Id: fccb0c07-20f0-4a71-bdb0-b55a2826889c
	I1009 23:19:59.096030 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:19:59.096037 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:19:59.096049 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:19:59.096057 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:19:59.096067 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:19:59 GMT
	I1009 23:19:59.096219 1609109 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-717678","uid":"874e5ae6-ff43-40a8-8089-c062ee5a5cbc","resourceVersion":"398","creationTimestamp":"2023-10-09T23:19:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-717678","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-717678","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_19_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:19:08Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1009 23:19:59.096755 1609109 pod_ready.go:92] pod "kube-proxy-8zh7z" in "kube-system" namespace has status "Ready":"True"
	I1009 23:19:59.096783 1609109 pod_ready.go:81] duration metric: took 400.215938ms waiting for pod "kube-proxy-8zh7z" in "kube-system" namespace to be "Ready" ...
	I1009 23:19:59.096805 1609109 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-717678" in "kube-system" namespace to be "Ready" ...
	I1009 23:19:59.292990 1609109 request.go:629] Waited for 196.082746ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-717678
	I1009 23:19:59.293060 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-717678
	I1009 23:19:59.293070 1609109 round_trippers.go:469] Request Headers:
	I1009 23:19:59.293079 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:19:59.293089 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:19:59.295640 1609109 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:19:59.295700 1609109 round_trippers.go:577] Response Headers:
	I1009 23:19:59.295723 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:19:59.295743 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:19:59 GMT
	I1009 23:19:59.295773 1609109 round_trippers.go:580]     Audit-Id: 8b210faa-2372-4240-9def-760d2cdbf203
	I1009 23:19:59.295788 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:19:59.295795 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:19:59.295802 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:19:59.295926 1609109 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-717678","namespace":"kube-system","uid":"1efa97e5-8ca4-4dee-9657-510e82694828","resourceVersion":"385","creationTimestamp":"2023-10-09T23:19:11Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"0177a77cb732655e3ea7b32da15d984a","kubernetes.io/config.mirror":"0177a77cb732655e3ea7b32da15d984a","kubernetes.io/config.seen":"2023-10-09T23:19:11.222874639Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-717678","uid":"874e5ae6-ff43-40a8-8089-c062ee5a5cbc","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:19:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I1009 23:19:59.493707 1609109 request.go:629] Waited for 197.342278ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-717678
	I1009 23:19:59.493765 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-717678
	I1009 23:19:59.493771 1609109 round_trippers.go:469] Request Headers:
	I1009 23:19:59.493780 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:19:59.493791 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:19:59.496265 1609109 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:19:59.496320 1609109 round_trippers.go:577] Response Headers:
	I1009 23:19:59.496330 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:19:59.496337 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:19:59.496344 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:19:59.496352 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:19:59.496362 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:19:59 GMT
	I1009 23:19:59.496377 1609109 round_trippers.go:580]     Audit-Id: b75f6e1e-6826-49d9-a15c-142fd3dce831
	I1009 23:19:59.496472 1609109 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-717678","uid":"874e5ae6-ff43-40a8-8089-c062ee5a5cbc","resourceVersion":"398","creationTimestamp":"2023-10-09T23:19:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-717678","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-717678","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_19_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:19:08Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1009 23:19:59.496872 1609109 pod_ready.go:92] pod "kube-scheduler-multinode-717678" in "kube-system" namespace has status "Ready":"True"
	I1009 23:19:59.496889 1609109 pod_ready.go:81] duration metric: took 400.071666ms waiting for pod "kube-scheduler-multinode-717678" in "kube-system" namespace to be "Ready" ...
	I1009 23:19:59.496901 1609109 pod_ready.go:38] duration metric: took 2.000390356s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
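
For context: the "Ready" checks above reduce to reading the PodReady condition from each pod's status. A minimal client-go sketch of that check (helper name and poll interval are invented here; this is an illustration, not minikube's actual pod_ready.go):

	package sketch

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// waitPodReady polls until the pod's PodReady condition reports True --
	// the same condition behind the `has status "Ready":"True"` lines above.
	func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			time.Sleep(500 * time.Millisecond) // poll interval is illustrative
		}
		return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
	}
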
	I1009 23:19:59.496921 1609109 api_server.go:52] waiting for apiserver process to appear ...
	I1009 23:19:59.496979 1609109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 23:19:59.508863 1609109 command_runner.go:130] > 1243
	I1009 23:19:59.510793 1609109 api_server.go:72] duration metric: took 34.664933928s to wait for apiserver process to appear ...
	I1009 23:19:59.510825 1609109 api_server.go:88] waiting for apiserver healthz status ...
	I1009 23:19:59.510848 1609109 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1009 23:19:59.520018 1609109 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
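
The healthz probe is just an HTTPS GET that must come back 200 with the literal body "ok". A minimal sketch (the insecure TLS config stands in for the cluster-CA verification a real client performs):

	package sketch

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	// checkHealthz mirrors the probe logged above: GET /healthz, expect 200 "ok".
	func checkHealthz(url string) error {
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // illustration only
		}}
		resp, err := client.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		if resp.StatusCode != http.StatusOK || string(body) != "ok" {
			return fmt.Errorf("healthz returned %d %q", resp.StatusCode, body)
		}
		return nil
	}
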
	I1009 23:19:59.520088 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I1009 23:19:59.520100 1609109 round_trippers.go:469] Request Headers:
	I1009 23:19:59.520109 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:19:59.520116 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:19:59.521361 1609109 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1009 23:19:59.521381 1609109 round_trippers.go:577] Response Headers:
	I1009 23:19:59.521425 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:19:59.521432 1609109 round_trippers.go:580]     Content-Length: 263
	I1009 23:19:59.521438 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:19:59 GMT
	I1009 23:19:59.521447 1609109 round_trippers.go:580]     Audit-Id: f05309a2-0be2-4b4f-bc05-0775565bf6c4
	I1009 23:19:59.521454 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:19:59.521463 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:19:59.521475 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:19:59.521499 1609109 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.2",
	  "gitCommit": "89a4ea3e1e4ddd7f7572286090359983e0387b2f",
	  "gitTreeState": "clean",
	  "buildDate": "2023-09-13T09:29:07Z",
	  "goVersion": "go1.20.8",
	  "compiler": "gc",
	  "platform": "linux/arm64"
	}
	I1009 23:19:59.521600 1609109 api_server.go:141] control plane version: v1.28.2
	I1009 23:19:59.521617 1609109 api_server.go:131] duration metric: took 10.785454ms to wait for apiserver health ...
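
The "control plane version" figure is obtained by decoding the /version payload printed above. A sketch of that decode, with the struct trimmed to fields the response actually carries:

	package sketch

	import "encoding/json"

	// versionInfo matches the /version payload shown above.
	type versionInfo struct {
		Major      string `json:"major"`
		Minor      string `json:"minor"`
		GitVersion string `json:"gitVersion"` // e.g. "v1.28.2"
		Platform   string `json:"platform"`
	}

	func parseVersion(body []byte) (string, error) {
		var v versionInfo
		if err := json.Unmarshal(body, &v); err != nil {
			return "", err
		}
		return v.GitVersion, nil
	}
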
	I1009 23:19:59.521624 1609109 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 23:19:59.692953 1609109 request.go:629] Waited for 171.264164ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1009 23:19:59.693014 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1009 23:19:59.693026 1609109 round_trippers.go:469] Request Headers:
	I1009 23:19:59.693035 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:19:59.693042 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:19:59.696668 1609109 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 23:19:59.696777 1609109 round_trippers.go:577] Response Headers:
	I1009 23:19:59.696846 1609109 round_trippers.go:580]     Audit-Id: ab9d69f5-c50f-4c88-b370-2afd08a8e1c6
	I1009 23:19:59.696872 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:19:59.696891 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:19:59.696910 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:19:59.696931 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:19:59.696959 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:19:59 GMT
	I1009 23:19:59.697389 1609109 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"419"},"items":[{"metadata":{"name":"coredns-5dd5756b68-zz9n9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"319f2e3b-8eb5-4d49-bfa6-f7add29b87fd","resourceVersion":"414","creationTimestamp":"2023-10-09T23:19:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"2a4f696e-345b-4d7d-9e3b-7f5b62b3a01c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:19:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2a4f696e-345b-4d7d-9e3b-7f5b62b3a01c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55612 chars]
	I1009 23:19:59.699766 1609109 system_pods.go:59] 8 kube-system pods found
	I1009 23:19:59.699795 1609109 system_pods.go:61] "coredns-5dd5756b68-zz9n9" [319f2e3b-8eb5-4d49-bfa6-f7add29b87fd] Running
	I1009 23:19:59.699802 1609109 system_pods.go:61] "etcd-multinode-717678" [05c1fa65-d9c1-4a32-b59a-8fb73083f98f] Running
	I1009 23:19:59.699808 1609109 system_pods.go:61] "kindnet-mr6j6" [6f90c4c5-a8d7-4d81-85be-abc93edf1b46] Running
	I1009 23:19:59.699813 1609109 system_pods.go:61] "kube-apiserver-multinode-717678" [ab6577f1-1934-4fc4-bc32-83c7646ea4ce] Running
	I1009 23:19:59.699820 1609109 system_pods.go:61] "kube-controller-manager-multinode-717678" [0ab0571d-b106-409a-a094-39501a8718a1] Running
	I1009 23:19:59.699831 1609109 system_pods.go:61] "kube-proxy-8zh7z" [26420832-f9b9-4c98-b7c0-8b3f9d15b4aa] Running
	I1009 23:19:59.699840 1609109 system_pods.go:61] "kube-scheduler-multinode-717678" [1efa97e5-8ca4-4dee-9657-510e82694828] Running
	I1009 23:19:59.699845 1609109 system_pods.go:61] "storage-provisioner" [832d43a3-110f-47e7-a82a-e4fbfe107d43] Running
	I1009 23:19:59.699853 1609109 system_pods.go:74] duration metric: took 178.223029ms to wait for pod list to return data ...
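
A note on the recurring "Waited for ... due to client-side throttling, not priority and fairness" lines: they are emitted by client-go's token-bucket rate limiter (the QPS/Burst settings on the REST client), not by the server-side Priority and Fairness feature whose flow-schema UIDs appear in the response headers. A toy reproduction of the mechanism, with illustrative 5 QPS / burst 10 values:

	package sketch

	import (
		"context"
		"log"
		"time"

		"golang.org/x/time/rate"
	)

	// throttledGets issues calls through a token bucket; whenever a call has to
	// wait for a token, it logs a line like the ones above.
	func throttledGets(ctx context.Context, doGet func()) error {
		limiter := rate.NewLimiter(rate.Limit(5), 10) // 5 QPS, burst 10 (illustrative)
		for i := 0; i < 20; i++ {
			start := time.Now()
			if err := limiter.Wait(ctx); err != nil {
				return err
			}
			if wait := time.Since(start); wait > time.Millisecond {
				log.Printf("Waited for %v due to client-side throttling", wait)
			}
			doGet()
		}
		return nil
	}
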
	I1009 23:19:59.699863 1609109 default_sa.go:34] waiting for default service account to be created ...
	I1009 23:19:59.893267 1609109 request.go:629] Waited for 193.327561ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I1009 23:19:59.893333 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I1009 23:19:59.893342 1609109 round_trippers.go:469] Request Headers:
	I1009 23:19:59.893352 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:19:59.893364 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:19:59.896801 1609109 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 23:19:59.896834 1609109 round_trippers.go:577] Response Headers:
	I1009 23:19:59.896844 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:19:59.896851 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:19:59.896858 1609109 round_trippers.go:580]     Content-Length: 261
	I1009 23:19:59.896864 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:19:59 GMT
	I1009 23:19:59.896870 1609109 round_trippers.go:580]     Audit-Id: 9a0c9766-b104-4200-a6a2-59dbc3622e55
	I1009 23:19:59.896877 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:19:59.896886 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:19:59.896963 1609109 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"420"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"ce5976a0-30e3-49c6-88fe-d61d545e5212","resourceVersion":"304","creationTimestamp":"2023-10-09T23:19:24Z"}}]}
	I1009 23:19:59.897222 1609109 default_sa.go:45] found service account: "default"
	I1009 23:19:59.897242 1609109 default_sa.go:55] duration metric: took 197.372326ms for default service account to be created ...
	I1009 23:19:59.897253 1609109 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 23:20:00.093530 1609109 request.go:629] Waited for 196.183398ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1009 23:20:00.093604 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1009 23:20:00.093610 1609109 round_trippers.go:469] Request Headers:
	I1009 23:20:00.093619 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:20:00.093626 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:00.100551 1609109 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1009 23:20:00.100575 1609109 round_trippers.go:577] Response Headers:
	I1009 23:20:00.100584 1609109 round_trippers.go:580]     Audit-Id: 78e6d844-5206-4025-bfc7-e54dff22d94e
	I1009 23:20:00.100591 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:00.100598 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:00.100604 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:20:00.100615 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:20:00.100621 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:00 GMT
	I1009 23:20:00.102377 1609109 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"421"},"items":[{"metadata":{"name":"coredns-5dd5756b68-zz9n9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"319f2e3b-8eb5-4d49-bfa6-f7add29b87fd","resourceVersion":"414","creationTimestamp":"2023-10-09T23:19:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"2a4f696e-345b-4d7d-9e3b-7f5b62b3a01c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:19:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2a4f696e-345b-4d7d-9e3b-7f5b62b3a01c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55612 chars]
	I1009 23:20:00.104891 1609109 system_pods.go:86] 8 kube-system pods found
	I1009 23:20:00.104925 1609109 system_pods.go:89] "coredns-5dd5756b68-zz9n9" [319f2e3b-8eb5-4d49-bfa6-f7add29b87fd] Running
	I1009 23:20:00.104934 1609109 system_pods.go:89] "etcd-multinode-717678" [05c1fa65-d9c1-4a32-b59a-8fb73083f98f] Running
	I1009 23:20:00.104988 1609109 system_pods.go:89] "kindnet-mr6j6" [6f90c4c5-a8d7-4d81-85be-abc93edf1b46] Running
	I1009 23:20:00.105002 1609109 system_pods.go:89] "kube-apiserver-multinode-717678" [ab6577f1-1934-4fc4-bc32-83c7646ea4ce] Running
	I1009 23:20:00.105009 1609109 system_pods.go:89] "kube-controller-manager-multinode-717678" [0ab0571d-b106-409a-a094-39501a8718a1] Running
	I1009 23:20:00.105015 1609109 system_pods.go:89] "kube-proxy-8zh7z" [26420832-f9b9-4c98-b7c0-8b3f9d15b4aa] Running
	I1009 23:20:00.105024 1609109 system_pods.go:89] "kube-scheduler-multinode-717678" [1efa97e5-8ca4-4dee-9657-510e82694828] Running
	I1009 23:20:00.105029 1609109 system_pods.go:89] "storage-provisioner" [832d43a3-110f-47e7-a82a-e4fbfe107d43] Running
	I1009 23:20:00.105039 1609109 system_pods.go:126] duration metric: took 207.780177ms to wait for k8s-apps to be running ...
	I1009 23:20:00.105052 1609109 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 23:20:00.105137 1609109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 23:20:00.161219 1609109 system_svc.go:56] duration metric: took 56.14709ms WaitForService to wait for kubelet.
	I1009 23:20:00.161248 1609109 kubeadm.go:581] duration metric: took 35.315393069s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1009 23:20:00.161272 1609109 node_conditions.go:102] verifying NodePressure condition ...
	I1009 23:20:00.294481 1609109 request.go:629] Waited for 133.078378ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I1009 23:20:00.294669 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I1009 23:20:00.294699 1609109 round_trippers.go:469] Request Headers:
	I1009 23:20:00.294736 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:00.294758 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:20:00.318807 1609109 round_trippers.go:574] Response Status: 200 OK in 24 milliseconds
	I1009 23:20:00.318896 1609109 round_trippers.go:577] Response Headers:
	I1009 23:20:00.318923 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:20:00.318940 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:00 GMT
	I1009 23:20:00.318948 1609109 round_trippers.go:580]     Audit-Id: 10044aa0-f48d-4aa0-b5e8-8311f9add5f2
	I1009 23:20:00.318970 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:00.318980 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:00.318988 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:20:00.319198 1609109 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"421"},"items":[{"metadata":{"name":"multinode-717678","uid":"874e5ae6-ff43-40a8-8089-c062ee5a5cbc","resourceVersion":"398","creationTimestamp":"2023-10-09T23:19:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-717678","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-717678","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_19_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 6082 chars]
	I1009 23:20:00.319853 1609109 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1009 23:20:00.319896 1609109 node_conditions.go:123] node cpu capacity is 2
	I1009 23:20:00.319912 1609109 node_conditions.go:105] duration metric: took 158.633438ms to run NodePressure ...
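
The two capacity figures above are read straight off the returned Node objects; roughly (helper name invented):

	package sketch

	import corev1 "k8s.io/api/core/v1"

	// nodeCapacity extracts the two figures the log reports -- ephemeral
	// storage ("203034800Ki") and cpu ("2") -- from the node's status.
	func nodeCapacity(node *corev1.Node) (storage, cpu string) {
		s := node.Status.Capacity[corev1.ResourceEphemeralStorage]
		c := node.Status.Capacity[corev1.ResourceCPU]
		return s.String(), c.String()
	}
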
	I1009 23:20:00.319925 1609109 start.go:228] waiting for startup goroutines ...
	I1009 23:20:00.319933 1609109 start.go:233] waiting for cluster config update ...
	I1009 23:20:00.319944 1609109 start.go:242] writing updated cluster config ...
	I1009 23:20:00.334140 1609109 out.go:177] 
	I1009 23:20:00.339103 1609109 config.go:182] Loaded profile config "multinode-717678": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1009 23:20:00.339327 1609109 profile.go:148] Saving config to /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/multinode-717678/config.json ...
	I1009 23:20:00.345381 1609109 out.go:177] * Starting worker node multinode-717678-m02 in cluster multinode-717678
	I1009 23:20:00.348701 1609109 cache.go:122] Beginning downloading kic base image for docker with crio
	I1009 23:20:00.351759 1609109 out.go:177] * Pulling base image ...
	I1009 23:20:00.354535 1609109 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1009 23:20:00.354640 1609109 cache.go:57] Caching tarball of preloaded images
	I1009 23:20:00.354596 1609109 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon
	I1009 23:20:00.354988 1609109 preload.go:174] Found /home/jenkins/minikube-integration/17375-1537865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1009 23:20:00.355001 1609109 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I1009 23:20:00.355103 1609109 profile.go:148] Saving config to /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/multinode-717678/config.json ...
	I1009 23:20:00.450254 1609109 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon, skipping pull
	I1009 23:20:00.450288 1609109 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae exists in daemon, skipping load
	I1009 23:20:00.450305 1609109 cache.go:195] Successfully downloaded all kic artifacts
	I1009 23:20:00.450343 1609109 start.go:365] acquiring machines lock for multinode-717678-m02: {Name:mkbc1fef0a3bf9e80b41b12d13a1a076698920fb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 23:20:00.450479 1609109 start.go:369] acquired machines lock for "multinode-717678-m02" in 113.232µs
	I1009 23:20:00.450510 1609109 start.go:93] Provisioning new machine with config: &{Name:multinode-717678 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-717678 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1009 23:20:00.450604 1609109 start.go:125] createHost starting for "m02" (driver="docker")
	I1009 23:20:00.454062 1609109 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1009 23:20:00.454221 1609109 start.go:159] libmachine.API.Create for "multinode-717678" (driver="docker")
	I1009 23:20:00.454305 1609109 client.go:168] LocalClient.Create starting
	I1009 23:20:00.454383 1609109 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/ca.pem
	I1009 23:20:00.454448 1609109 main.go:141] libmachine: Decoding PEM data...
	I1009 23:20:00.454465 1609109 main.go:141] libmachine: Parsing certificate...
	I1009 23:20:00.454601 1609109 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/cert.pem
	I1009 23:20:00.454637 1609109 main.go:141] libmachine: Decoding PEM data...
	I1009 23:20:00.454653 1609109 main.go:141] libmachine: Parsing certificate...
	I1009 23:20:00.454944 1609109 cli_runner.go:164] Run: docker network inspect multinode-717678 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 23:20:00.526366 1609109 network_create.go:77] Found existing network {name:multinode-717678 subnet:0x40032cd890 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I1009 23:20:00.526410 1609109 kic.go:118] calculated static IP "192.168.58.3" for the "multinode-717678-m02" container
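
The "calculated static IP" step gives each additional node the next host address after the primary node's 192.168.58.2 on the existing multinode-717678 network. A simplified sketch of the idea, not minikube's exact allocator:

	package sketch

	import (
		"fmt"
		"net"
	)

	// nextNodeIP returns primary+nodeIndex in the last octet, so the second
	// node (index 1) of a cluster whose primary is .2 lands on .3.
	func nextNodeIP(primary string, nodeIndex int) (string, error) {
		ip := net.ParseIP(primary).To4()
		if ip == nil {
			return "", fmt.Errorf("not an IPv4 address: %s", primary)
		}
		out := make(net.IP, len(ip))
		copy(out, ip)
		out[3] += byte(nodeIndex)
		return out.String(), nil
	}
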
	I1009 23:20:00.526508 1609109 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1009 23:20:00.557599 1609109 cli_runner.go:164] Run: docker volume create multinode-717678-m02 --label name.minikube.sigs.k8s.io=multinode-717678-m02 --label created_by.minikube.sigs.k8s.io=true
	I1009 23:20:00.630812 1609109 oci.go:103] Successfully created a docker volume multinode-717678-m02
	I1009 23:20:00.630909 1609109 cli_runner.go:164] Run: docker run --rm --name multinode-717678-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-717678-m02 --entrypoint /usr/bin/test -v multinode-717678-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -d /var/lib
	I1009 23:20:01.257433 1609109 oci.go:107] Successfully prepared a docker volume multinode-717678-m02
	I1009 23:20:01.257506 1609109 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1009 23:20:01.257535 1609109 kic.go:191] Starting extracting preloaded images to volume ...
	I1009 23:20:01.257650 1609109 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17375-1537865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-717678-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -I lz4 -xf /preloaded.tar -C /extractDir
	I1009 23:20:05.565181 1609109 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17375-1537865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-717678-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -I lz4 -xf /preloaded.tar -C /extractDir: (4.307484753s)
	I1009 23:20:05.567305 1609109 kic.go:200] duration metric: took 4.309762 seconds to extract preloaded images to volume
	W1009 23:20:05.567479 1609109 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1009 23:20:05.567591 1609109 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1009 23:20:05.656976 1609109 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-717678-m02 --name multinode-717678-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-717678-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-717678-m02 --network multinode-717678 --ip 192.168.58.3 --volume multinode-717678-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae
	I1009 23:20:06.056225 1609109 cli_runner.go:164] Run: docker container inspect multinode-717678-m02 --format={{.State.Running}}
	I1009 23:20:06.084265 1609109 cli_runner.go:164] Run: docker container inspect multinode-717678-m02 --format={{.State.Status}}
	I1009 23:20:06.113766 1609109 cli_runner.go:164] Run: docker exec multinode-717678-m02 stat /var/lib/dpkg/alternatives/iptables
	I1009 23:20:06.221042 1609109 oci.go:144] the created container "multinode-717678-m02" has a running status.
	I1009 23:20:06.221075 1609109 kic.go:222] Creating ssh key for kic: /home/jenkins/minikube-integration/17375-1537865/.minikube/machines/multinode-717678-m02/id_rsa...
	I1009 23:20:06.775346 1609109 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-1537865/.minikube/machines/multinode-717678-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1009 23:20:06.775472 1609109 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17375-1537865/.minikube/machines/multinode-717678-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1009 23:20:06.819007 1609109 cli_runner.go:164] Run: docker container inspect multinode-717678-m02 --format={{.State.Status}}
	I1009 23:20:06.864350 1609109 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1009 23:20:06.864376 1609109 kic_runner.go:114] Args: [docker exec --privileged multinode-717678-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1009 23:20:06.982304 1609109 cli_runner.go:164] Run: docker container inspect multinode-717678-m02 --format={{.State.Status}}
	I1009 23:20:07.018271 1609109 machine.go:88] provisioning docker machine ...
	I1009 23:20:07.018301 1609109 ubuntu.go:169] provisioning hostname "multinode-717678-m02"
	I1009 23:20:07.018371 1609109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-717678-m02
	I1009 23:20:07.053415 1609109 main.go:141] libmachine: Using SSH client type: native
	I1009 23:20:07.053829 1609109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 34439 <nil> <nil>}
	I1009 23:20:07.053842 1609109 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-717678-m02 && echo "multinode-717678-m02" | sudo tee /etc/hostname
	I1009 23:20:07.275232 1609109 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-717678-m02
	
	I1009 23:20:07.275307 1609109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-717678-m02
	I1009 23:20:07.300182 1609109 main.go:141] libmachine: Using SSH client type: native
	I1009 23:20:07.300685 1609109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 34439 <nil> <nil>}
	I1009 23:20:07.300707 1609109 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-717678-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-717678-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-717678-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 23:20:07.448489 1609109 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 23:20:07.448559 1609109 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17375-1537865/.minikube CaCertPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17375-1537865/.minikube}
	I1009 23:20:07.448588 1609109 ubuntu.go:177] setting up certificates
	I1009 23:20:07.448607 1609109 provision.go:83] configureAuth start
	I1009 23:20:07.448697 1609109 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-717678-m02
	I1009 23:20:07.482781 1609109 provision.go:138] copyHostCerts
	I1009 23:20:07.482886 1609109 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17375-1537865/.minikube/key.pem
	I1009 23:20:07.482924 1609109 exec_runner.go:144] found /home/jenkins/minikube-integration/17375-1537865/.minikube/key.pem, removing ...
	I1009 23:20:07.482932 1609109 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17375-1537865/.minikube/key.pem
	I1009 23:20:07.483016 1609109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17375-1537865/.minikube/key.pem (1679 bytes)
	I1009 23:20:07.483100 1609109 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17375-1537865/.minikube/ca.pem
	I1009 23:20:07.483226 1609109 exec_runner.go:144] found /home/jenkins/minikube-integration/17375-1537865/.minikube/ca.pem, removing ...
	I1009 23:20:07.483235 1609109 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17375-1537865/.minikube/ca.pem
	I1009 23:20:07.483280 1609109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17375-1537865/.minikube/ca.pem (1078 bytes)
	I1009 23:20:07.483348 1609109 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17375-1537865/.minikube/cert.pem
	I1009 23:20:07.483364 1609109 exec_runner.go:144] found /home/jenkins/minikube-integration/17375-1537865/.minikube/cert.pem, removing ...
	I1009 23:20:07.483368 1609109 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17375-1537865/.minikube/cert.pem
	I1009 23:20:07.483394 1609109 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17375-1537865/.minikube/cert.pem (1123 bytes)
	I1009 23:20:07.483436 1609109 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17375-1537865/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17375-1537865/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17375-1537865/.minikube/certs/ca-key.pem org=jenkins.multinode-717678-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-717678-m02]
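
The server certificate is generated with the names and addresses from the san=[...] list above as subject alternative names. Roughly how that list maps onto a crypto/x509 template (signing against the CA key is omitted; values are copied from the log line, the rest is illustrative):

	package sketch

	import (
		"crypto/x509"
		"crypto/x509/pkix"
		"math/big"
		"net"
		"time"
	)

	// serverCertTemplate builds an x509 template carrying the SANs logged above.
	func serverCertTemplate() *x509.Certificate {
		return &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.multinode-717678-m02"}},
			DNSNames:     []string{"localhost", "minikube", "multinode-717678-m02"},
			IPAddresses:  []net.IP{net.ParseIP("192.168.58.3"), net.ParseIP("127.0.0.1")},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the machine config above
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
	}
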
	I1009 23:20:07.847083 1609109 provision.go:172] copyRemoteCerts
	I1009 23:20:07.847173 1609109 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 23:20:07.847223 1609109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-717678-m02
	I1009 23:20:07.866569 1609109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34439 SSHKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/machines/multinode-717678-m02/id_rsa Username:docker}
	I1009 23:20:07.966593 1609109 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1009 23:20:07.966660 1609109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 23:20:07.997725 1609109 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-1537865/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1009 23:20:07.997791 1609109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-1537865/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1009 23:20:08.039192 1609109 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-1537865/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1009 23:20:08.039276 1609109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-1537865/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 23:20:08.071227 1609109 provision.go:86] duration metric: configureAuth took 622.579463ms
	I1009 23:20:08.071254 1609109 ubuntu.go:193] setting minikube options for container-runtime
	I1009 23:20:08.071456 1609109 config.go:182] Loaded profile config "multinode-717678": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1009 23:20:08.071570 1609109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-717678-m02
	I1009 23:20:08.091810 1609109 main.go:141] libmachine: Using SSH client type: native
	I1009 23:20:08.092225 1609109 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 34439 <nil> <nil>}
	I1009 23:20:08.092246 1609109 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 23:20:08.351601 1609109 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 23:20:08.351624 1609109 machine.go:91] provisioned docker machine in 1.333334763s
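
About the "%!s(MISSING)" inside the logged command above (and the similar "(MISSING)" tokens elsewhere in this log): these are Go fmt error markers, produced when a string containing a format verb is routed through a printf-style logger without a matching argument. The command actually sent over SSH almost certainly contained a literal %s; only the log rendering is mangled. A two-line demonstration:

	package main

	import "fmt"

	// fmt substitutes "%!s(MISSING)" when a format string has more verbs than
	// arguments -- the same marker that appears in the logged SSH command.
	func main() {
		fmt.Printf("sudo mkdir -p /etc/sysconfig && printf %s ...\n")       // -> ... printf %!s(MISSING) ...
		fmt.Printf("sudo mkdir -p /etc/sysconfig && printf %s ...\n", "%s") // -> ... printf %s ...
	}
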
	I1009 23:20:08.351635 1609109 client.go:171] LocalClient.Create took 7.897323765s
	I1009 23:20:08.351652 1609109 start.go:167] duration metric: libmachine.API.Create for "multinode-717678" took 7.897431925s
	I1009 23:20:08.351660 1609109 start.go:300] post-start starting for "multinode-717678-m02" (driver="docker")
	I1009 23:20:08.351670 1609109 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 23:20:08.351736 1609109 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 23:20:08.351784 1609109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-717678-m02
	I1009 23:20:08.371817 1609109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34439 SSHKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/machines/multinode-717678-m02/id_rsa Username:docker}
	I1009 23:20:08.470406 1609109 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 23:20:08.474783 1609109 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I1009 23:20:08.474802 1609109 command_runner.go:130] > NAME="Ubuntu"
	I1009 23:20:08.474809 1609109 command_runner.go:130] > VERSION_ID="22.04"
	I1009 23:20:08.474816 1609109 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I1009 23:20:08.474821 1609109 command_runner.go:130] > VERSION_CODENAME=jammy
	I1009 23:20:08.474826 1609109 command_runner.go:130] > ID=ubuntu
	I1009 23:20:08.474831 1609109 command_runner.go:130] > ID_LIKE=debian
	I1009 23:20:08.474837 1609109 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1009 23:20:08.474843 1609109 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1009 23:20:08.474856 1609109 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1009 23:20:08.474864 1609109 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1009 23:20:08.474871 1609109 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I1009 23:20:08.474928 1609109 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 23:20:08.474952 1609109 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1009 23:20:08.474962 1609109 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1009 23:20:08.474969 1609109 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1009 23:20:08.474979 1609109 filesync.go:126] Scanning /home/jenkins/minikube-integration/17375-1537865/.minikube/addons for local assets ...
	I1009 23:20:08.475039 1609109 filesync.go:126] Scanning /home/jenkins/minikube-integration/17375-1537865/.minikube/files for local assets ...
	I1009 23:20:08.475162 1609109 filesync.go:149] local asset: /home/jenkins/minikube-integration/17375-1537865/.minikube/files/etc/ssl/certs/15432152.pem -> 15432152.pem in /etc/ssl/certs
	I1009 23:20:08.475170 1609109 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-1537865/.minikube/files/etc/ssl/certs/15432152.pem -> /etc/ssl/certs/15432152.pem
	I1009 23:20:08.475271 1609109 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 23:20:08.486276 1609109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-1537865/.minikube/files/etc/ssl/certs/15432152.pem --> /etc/ssl/certs/15432152.pem (1708 bytes)
	I1009 23:20:08.516602 1609109 start.go:303] post-start completed in 164.926577ms
	I1009 23:20:08.516973 1609109 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-717678-m02
	I1009 23:20:08.535768 1609109 profile.go:148] Saving config to /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/multinode-717678/config.json ...
	I1009 23:20:08.536066 1609109 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 23:20:08.536116 1609109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-717678-m02
	I1009 23:20:08.554379 1609109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34439 SSHKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/machines/multinode-717678-m02/id_rsa Username:docker}
	I1009 23:20:08.649211 1609109 command_runner.go:130] > 14%!
	(MISSING)I1009 23:20:08.649290 1609109 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 23:20:08.654728 1609109 command_runner.go:130] > 168G
	I1009 23:20:08.655174 1609109 start.go:128] duration metric: createHost completed in 8.204559175s
	I1009 23:20:08.655193 1609109 start.go:83] releasing machines lock for "multinode-717678-m02", held for 8.204704849s
	I1009 23:20:08.655267 1609109 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-717678-m02
	I1009 23:20:08.676405 1609109 out.go:177] * Found network options:
	I1009 23:20:08.678345 1609109 out.go:177]   - NO_PROXY=192.168.58.2
	W1009 23:20:08.680207 1609109 proxy.go:119] fail to check proxy env: Error ip not in block
	W1009 23:20:08.680247 1609109 proxy.go:119] fail to check proxy env: Error ip not in block
	I1009 23:20:08.680318 1609109 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 23:20:08.680371 1609109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-717678-m02
	I1009 23:20:08.680635 1609109 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 23:20:08.680684 1609109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-717678-m02
	I1009 23:20:08.700894 1609109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34439 SSHKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/machines/multinode-717678-m02/id_rsa Username:docker}
	I1009 23:20:08.716366 1609109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34439 SSHKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/machines/multinode-717678-m02/id_rsa Username:docker}
	I1009 23:20:08.956675 1609109 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1009 23:20:08.967603 1609109 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1009 23:20:08.971103 1609109 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I1009 23:20:08.971147 1609109 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I1009 23:20:08.971156 1609109 command_runner.go:130] > Device: b3h/179d	Inode: 1304922     Links: 1
	I1009 23:20:08.971164 1609109 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1009 23:20:08.971171 1609109 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I1009 23:20:08.971177 1609109 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I1009 23:20:08.971184 1609109 command_runner.go:130] > Change: 2023-10-09 22:55:09.641389644 +0000
	I1009 23:20:08.971190 1609109 command_runner.go:130] >  Birth: 2023-10-09 22:55:09.641389644 +0000
	I1009 23:20:08.971267 1609109 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 23:20:08.995065 1609109 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1009 23:20:08.995278 1609109 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 23:20:09.045617 1609109 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I1009 23:20:09.045805 1609109 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1009 23:20:09.045837 1609109 start.go:472] detecting cgroup driver to use...
	I1009 23:20:09.045902 1609109 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1009 23:20:09.046025 1609109 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 23:20:09.066129 1609109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 23:20:09.080945 1609109 docker.go:198] disabling cri-docker service (if available) ...
	I1009 23:20:09.081023 1609109 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 23:20:09.099660 1609109 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 23:20:09.119498 1609109 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 23:20:09.225363 1609109 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 23:20:09.340164 1609109 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1009 23:20:09.340198 1609109 docker.go:214] disabling docker service ...
	I1009 23:20:09.340253 1609109 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 23:20:09.363705 1609109 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 23:20:09.379162 1609109 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 23:20:09.474956 1609109 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1009 23:20:09.475034 1609109 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 23:20:09.489340 1609109 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1009 23:20:09.589596 1609109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 23:20:09.606994 1609109 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 23:20:09.626136 1609109 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1009 23:20:09.627620 1609109 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1009 23:20:09.627713 1609109 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 23:20:09.639707 1609109 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 23:20:09.639816 1609109 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 23:20:09.652041 1609109 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 23:20:09.665593 1609109 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
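
Taken together, the three sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with the following settings (reconstructed from the commands, not captured from the machine):

	pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
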
	I1009 23:20:09.677591 1609109 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 23:20:09.688753 1609109 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 23:20:09.697854 1609109 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1009 23:20:09.699009 1609109 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 23:20:09.709213 1609109 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 23:20:09.809989 1609109 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 23:20:09.943467 1609109 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 23:20:09.943536 1609109 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 23:20:09.948524 1609109 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1009 23:20:09.948549 1609109 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1009 23:20:09.948558 1609109 command_runner.go:130] > Device: bch/188d	Inode: 190         Links: 1
	I1009 23:20:09.948566 1609109 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1009 23:20:09.948572 1609109 command_runner.go:130] > Access: 2023-10-09 23:20:09.926125159 +0000
	I1009 23:20:09.948579 1609109 command_runner.go:130] > Modify: 2023-10-09 23:20:09.926125159 +0000
	I1009 23:20:09.948585 1609109 command_runner.go:130] > Change: 2023-10-09 23:20:09.926125159 +0000
	I1009 23:20:09.948594 1609109 command_runner.go:130] >  Birth: -
	I1009 23:20:09.948607 1609109 start.go:540] Will wait 60s for crictl version
	I1009 23:20:09.948667 1609109 ssh_runner.go:195] Run: which crictl
	I1009 23:20:09.956085 1609109 command_runner.go:130] > /usr/bin/crictl
	I1009 23:20:09.957615 1609109 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 23:20:10.002253 1609109 command_runner.go:130] > Version:  0.1.0
	I1009 23:20:10.002640 1609109 command_runner.go:130] > RuntimeName:  cri-o
	I1009 23:20:10.002906 1609109 command_runner.go:130] > RuntimeVersion:  1.24.6
	I1009 23:20:10.003160 1609109 command_runner.go:130] > RuntimeApiVersion:  v1
	I1009 23:20:10.015450 1609109 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
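The version handshake above confirms that the freshly restarted socket is serving the CRI v1 API. A sketch of the same probe run manually (the explicit endpoint is redundant given /etc/crictl.yaml, shown here only for clarity):

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
    # Version:            0.1.0     <- CRI API schema version reported by the runtime
    # RuntimeName:        cri-o
    # RuntimeVersion:     1.24.6
    # RuntimeApiVersion:  v1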
	I1009 23:20:10.015637 1609109 ssh_runner.go:195] Run: crio --version
	I1009 23:20:10.080409 1609109 command_runner.go:130] > crio version 1.24.6
	I1009 23:20:10.080473 1609109 command_runner.go:130] > Version:          1.24.6
	I1009 23:20:10.080499 1609109 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1009 23:20:10.080517 1609109 command_runner.go:130] > GitTreeState:     clean
	I1009 23:20:10.080550 1609109 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1009 23:20:10.080571 1609109 command_runner.go:130] > GoVersion:        go1.18.2
	I1009 23:20:10.080590 1609109 command_runner.go:130] > Compiler:         gc
	I1009 23:20:10.080606 1609109 command_runner.go:130] > Platform:         linux/arm64
	I1009 23:20:10.080623 1609109 command_runner.go:130] > Linkmode:         dynamic
	I1009 23:20:10.080654 1609109 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1009 23:20:10.080677 1609109 command_runner.go:130] > SeccompEnabled:   true
	I1009 23:20:10.080695 1609109 command_runner.go:130] > AppArmorEnabled:  false
	I1009 23:20:10.082921 1609109 ssh_runner.go:195] Run: crio --version
	I1009 23:20:10.133645 1609109 command_runner.go:130] > crio version 1.24.6
	I1009 23:20:10.133721 1609109 command_runner.go:130] > Version:          1.24.6
	I1009 23:20:10.133745 1609109 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1009 23:20:10.133764 1609109 command_runner.go:130] > GitTreeState:     clean
	I1009 23:20:10.133797 1609109 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1009 23:20:10.133823 1609109 command_runner.go:130] > GoVersion:        go1.18.2
	I1009 23:20:10.133841 1609109 command_runner.go:130] > Compiler:         gc
	I1009 23:20:10.133860 1609109 command_runner.go:130] > Platform:         linux/arm64
	I1009 23:20:10.133882 1609109 command_runner.go:130] > Linkmode:         dynamic
	I1009 23:20:10.133919 1609109 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1009 23:20:10.133939 1609109 command_runner.go:130] > SeccompEnabled:   true
	I1009 23:20:10.133957 1609109 command_runner.go:130] > AppArmorEnabled:  false
	I1009 23:20:10.139021 1609109 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.6 ...
	I1009 23:20:10.142082 1609109 out.go:177]   - env NO_PROXY=192.168.58.2
	I1009 23:20:10.144619 1609109 cli_runner.go:164] Run: docker network inspect multinode-717678 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
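The cli_runner call above packs name, driver, subnet, gateway, MTU, and container IPs into one Go template so a single docker network inspect yields everything minikube needs. A smaller hand-runnable sketch of the same idea (the printed subnet is an assumption consistent with the 192.168.58.x addresses in this log):

    docker network inspect multinode-717678 \
      --format '{{.Name}} {{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
    # e.g. multinode-717678 192.168.58.0/24 192.168.58.1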
	I1009 23:20:10.163269 1609109 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1009 23:20:10.168400 1609109 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
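The one-liner above is an idempotent /etc/hosts update: filter out any stale host.minikube.internal entry, append the fresh mapping, and copy the result back via a temp file so /etc/hosts is never truncated mid-read. The same pattern with the literals factored out (IP and HOST are placeholders):

    IP=192.168.58.1; HOST=host.minikube.internal
    { grep -v $'\t'"$HOST"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$HOST"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts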
	I1009 23:20:10.183309 1609109 certs.go:56] Setting up /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/multinode-717678 for IP: 192.168.58.3
	I1009 23:20:10.183340 1609109 certs.go:190] acquiring lock for shared ca certs: {Name:mk430c21a56d31b4f15423923c65864a3e3a3c9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 23:20:10.183479 1609109 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17375-1537865/.minikube/ca.key
	I1009 23:20:10.183531 1609109 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17375-1537865/.minikube/proxy-client-ca.key
	I1009 23:20:10.183546 1609109 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-1537865/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1009 23:20:10.183561 1609109 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-1537865/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1009 23:20:10.183574 1609109 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-1537865/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1009 23:20:10.183590 1609109 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-1537865/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1009 23:20:10.183649 1609109 certs.go:437] found cert: /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/home/jenkins/minikube-integration/17375-1537865/.minikube/certs/1543215.pem (1338 bytes)
	W1009 23:20:10.183684 1609109 certs.go:433] ignoring /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/home/jenkins/minikube-integration/17375-1537865/.minikube/certs/1543215_empty.pem, impossibly tiny 0 bytes
	I1009 23:20:10.183697 1609109 certs.go:437] found cert: /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/home/jenkins/minikube-integration/17375-1537865/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 23:20:10.183724 1609109 certs.go:437] found cert: /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/home/jenkins/minikube-integration/17375-1537865/.minikube/certs/ca.pem (1078 bytes)
	I1009 23:20:10.183750 1609109 certs.go:437] found cert: /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/home/jenkins/minikube-integration/17375-1537865/.minikube/certs/cert.pem (1123 bytes)
	I1009 23:20:10.183779 1609109 certs.go:437] found cert: /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/home/jenkins/minikube-integration/17375-1537865/.minikube/certs/key.pem (1679 bytes)
	I1009 23:20:10.183832 1609109 certs.go:437] found cert: /home/jenkins/minikube-integration/17375-1537865/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17375-1537865/.minikube/files/etc/ssl/certs/15432152.pem (1708 bytes)
	I1009 23:20:10.183864 1609109 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/1543215.pem -> /usr/share/ca-certificates/1543215.pem
	I1009 23:20:10.183881 1609109 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-1537865/.minikube/files/etc/ssl/certs/15432152.pem -> /usr/share/ca-certificates/15432152.pem
	I1009 23:20:10.183894 1609109 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17375-1537865/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1009 23:20:10.184243 1609109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-1537865/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 23:20:10.218639 1609109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-1537865/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 23:20:10.248905 1609109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-1537865/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 23:20:10.278672 1609109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-1537865/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 23:20:10.309810 1609109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/1543215.pem --> /usr/share/ca-certificates/1543215.pem (1338 bytes)
	I1009 23:20:10.339537 1609109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-1537865/.minikube/files/etc/ssl/certs/15432152.pem --> /usr/share/ca-certificates/15432152.pem (1708 bytes)
	I1009 23:20:10.368511 1609109 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-1537865/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 23:20:10.398475 1609109 ssh_runner.go:195] Run: openssl version
	I1009 23:20:10.405390 1609109 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I1009 23:20:10.405730 1609109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1543215.pem && ln -fs /usr/share/ca-certificates/1543215.pem /etc/ssl/certs/1543215.pem"
	I1009 23:20:10.417383 1609109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1543215.pem
	I1009 23:20:10.421913 1609109 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct  9 23:03 /usr/share/ca-certificates/1543215.pem
	I1009 23:20:10.422197 1609109 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  9 23:03 /usr/share/ca-certificates/1543215.pem
	I1009 23:20:10.422259 1609109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1543215.pem
	I1009 23:20:10.430543 1609109 command_runner.go:130] > 51391683
	I1009 23:20:10.430624 1609109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1543215.pem /etc/ssl/certs/51391683.0"
	I1009 23:20:10.442541 1609109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15432152.pem && ln -fs /usr/share/ca-certificates/15432152.pem /etc/ssl/certs/15432152.pem"
	I1009 23:20:10.454230 1609109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15432152.pem
	I1009 23:20:10.458858 1609109 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct  9 23:03 /usr/share/ca-certificates/15432152.pem
	I1009 23:20:10.458965 1609109 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  9 23:03 /usr/share/ca-certificates/15432152.pem
	I1009 23:20:10.459051 1609109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15432152.pem
	I1009 23:20:10.467682 1609109 command_runner.go:130] > 3ec20f2e
	I1009 23:20:10.468118 1609109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15432152.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 23:20:10.479762 1609109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 23:20:10.491499 1609109 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 23:20:10.496090 1609109 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct  9 22:55 /usr/share/ca-certificates/minikubeCA.pem
	I1009 23:20:10.496191 1609109 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  9 22:55 /usr/share/ca-certificates/minikubeCA.pem
	I1009 23:20:10.496267 1609109 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 23:20:10.504915 1609109 command_runner.go:130] > b5213941
	I1009 23:20:10.505009 1609109 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
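The hash-then-symlink step repeated for each certificate implements OpenSSL's hashed CA directory layout: verifiers look a CA up in /etc/ssl/certs by the hash of its subject name, so each PEM needs a <subject-hash>.0 link. A generic sketch of the per-certificate steps the log performs (CERT is a placeholder; b5213941 matches the hash computed above):

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")    # e.g. b5213941
    sudo ln -fs "$CERT" "/etc/ssl/certs/$HASH.0"     # .0 suffix leaves room for hash collisions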
	I1009 23:20:10.516895 1609109 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1009 23:20:10.521362 1609109 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1009 23:20:10.521398 1609109 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1009 23:20:10.521493 1609109 ssh_runner.go:195] Run: crio config
	I1009 23:20:10.576411 1609109 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1009 23:20:10.576439 1609109 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1009 23:20:10.576448 1609109 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1009 23:20:10.576452 1609109 command_runner.go:130] > #
	I1009 23:20:10.576461 1609109 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1009 23:20:10.576469 1609109 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1009 23:20:10.576477 1609109 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1009 23:20:10.576486 1609109 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1009 23:20:10.576493 1609109 command_runner.go:130] > # reload'.
	I1009 23:20:10.576501 1609109 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1009 23:20:10.576512 1609109 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1009 23:20:10.576521 1609109 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1009 23:20:10.576531 1609109 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1009 23:20:10.576535 1609109 command_runner.go:130] > [crio]
	I1009 23:20:10.576549 1609109 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1009 23:20:10.576562 1609109 command_runner.go:130] > # containers images, in this directory.
	I1009 23:20:10.577370 1609109 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1009 23:20:10.577389 1609109 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1009 23:20:10.578041 1609109 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I1009 23:20:10.578060 1609109 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1009 23:20:10.578068 1609109 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1009 23:20:10.578715 1609109 command_runner.go:130] > # storage_driver = "vfs"
	I1009 23:20:10.578733 1609109 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1009 23:20:10.578741 1609109 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1009 23:20:10.579059 1609109 command_runner.go:130] > # storage_option = [
	I1009 23:20:10.579445 1609109 command_runner.go:130] > # ]
	I1009 23:20:10.579463 1609109 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1009 23:20:10.579471 1609109 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1009 23:20:10.580101 1609109 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1009 23:20:10.580120 1609109 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1009 23:20:10.580128 1609109 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1009 23:20:10.580134 1609109 command_runner.go:130] > # always happen on a node reboot
	I1009 23:20:10.580768 1609109 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1009 23:20:10.580785 1609109 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1009 23:20:10.580793 1609109 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1009 23:20:10.580807 1609109 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1009 23:20:10.581478 1609109 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1009 23:20:10.581497 1609109 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1009 23:20:10.581508 1609109 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1009 23:20:10.582203 1609109 command_runner.go:130] > # internal_wipe = true
	I1009 23:20:10.582219 1609109 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1009 23:20:10.582237 1609109 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1009 23:20:10.582250 1609109 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1009 23:20:10.582911 1609109 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1009 23:20:10.582928 1609109 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1009 23:20:10.582934 1609109 command_runner.go:130] > [crio.api]
	I1009 23:20:10.582940 1609109 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1009 23:20:10.583610 1609109 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1009 23:20:10.583630 1609109 command_runner.go:130] > # IP address on which the stream server will listen.
	I1009 23:20:10.584268 1609109 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1009 23:20:10.584285 1609109 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1009 23:20:10.584293 1609109 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1009 23:20:10.584930 1609109 command_runner.go:130] > # stream_port = "0"
	I1009 23:20:10.584946 1609109 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1009 23:20:10.585594 1609109 command_runner.go:130] > # stream_enable_tls = false
	I1009 23:20:10.585611 1609109 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1009 23:20:10.586095 1609109 command_runner.go:130] > # stream_idle_timeout = ""
	I1009 23:20:10.586112 1609109 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1009 23:20:10.586120 1609109 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1009 23:20:10.586126 1609109 command_runner.go:130] > # minutes.
	I1009 23:20:10.586621 1609109 command_runner.go:130] > # stream_tls_cert = ""
	I1009 23:20:10.586637 1609109 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1009 23:20:10.586646 1609109 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1009 23:20:10.587162 1609109 command_runner.go:130] > # stream_tls_key = ""
	I1009 23:20:10.587180 1609109 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1009 23:20:10.587189 1609109 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1009 23:20:10.587201 1609109 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1009 23:20:10.587701 1609109 command_runner.go:130] > # stream_tls_ca = ""
	I1009 23:20:10.587719 1609109 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1009 23:20:10.588334 1609109 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1009 23:20:10.588352 1609109 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1009 23:20:10.588981 1609109 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1009 23:20:10.589006 1609109 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1009 23:20:10.589014 1609109 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1009 23:20:10.589022 1609109 command_runner.go:130] > [crio.runtime]
	I1009 23:20:10.589032 1609109 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1009 23:20:10.589041 1609109 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1009 23:20:10.589047 1609109 command_runner.go:130] > # "nofile=1024:2048"
	I1009 23:20:10.589058 1609109 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1009 23:20:10.589412 1609109 command_runner.go:130] > # default_ulimits = [
	I1009 23:20:10.589761 1609109 command_runner.go:130] > # ]
	I1009 23:20:10.589777 1609109 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1009 23:20:10.590435 1609109 command_runner.go:130] > # no_pivot = false
	I1009 23:20:10.590451 1609109 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1009 23:20:10.590460 1609109 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1009 23:20:10.591080 1609109 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1009 23:20:10.591095 1609109 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1009 23:20:10.591102 1609109 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1009 23:20:10.591111 1609109 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1009 23:20:10.591614 1609109 command_runner.go:130] > # conmon = ""
	I1009 23:20:10.591629 1609109 command_runner.go:130] > # Cgroup setting for conmon
	I1009 23:20:10.591638 1609109 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1009 23:20:10.591955 1609109 command_runner.go:130] > conmon_cgroup = "pod"
	I1009 23:20:10.591971 1609109 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1009 23:20:10.591978 1609109 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1009 23:20:10.591989 1609109 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1009 23:20:10.592299 1609109 command_runner.go:130] > # conmon_env = [
	I1009 23:20:10.592630 1609109 command_runner.go:130] > # ]
	I1009 23:20:10.592644 1609109 command_runner.go:130] > # Additional environment variables to set for all the
	I1009 23:20:10.592655 1609109 command_runner.go:130] > # containers. These are overridden if set in the
	I1009 23:20:10.592665 1609109 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1009 23:20:10.592976 1609109 command_runner.go:130] > # default_env = [
	I1009 23:20:10.593314 1609109 command_runner.go:130] > # ]
	I1009 23:20:10.593329 1609109 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1009 23:20:10.594007 1609109 command_runner.go:130] > # selinux = false
	I1009 23:20:10.594024 1609109 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1009 23:20:10.594033 1609109 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1009 23:20:10.594040 1609109 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1009 23:20:10.594522 1609109 command_runner.go:130] > # seccomp_profile = ""
	I1009 23:20:10.594538 1609109 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1009 23:20:10.594551 1609109 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1009 23:20:10.594561 1609109 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1009 23:20:10.594569 1609109 command_runner.go:130] > # which might increase security.
	I1009 23:20:10.595240 1609109 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I1009 23:20:10.595258 1609109 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1009 23:20:10.595266 1609109 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1009 23:20:10.595275 1609109 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1009 23:20:10.595285 1609109 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1009 23:20:10.595294 1609109 command_runner.go:130] > # This option supports live configuration reload.
	I1009 23:20:10.595982 1609109 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1009 23:20:10.595998 1609109 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1009 23:20:10.596015 1609109 command_runner.go:130] > # the cgroup blockio controller.
	I1009 23:20:10.596499 1609109 command_runner.go:130] > # blockio_config_file = ""
	I1009 23:20:10.596516 1609109 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1009 23:20:10.596522 1609109 command_runner.go:130] > # irqbalance daemon.
	I1009 23:20:10.597160 1609109 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1009 23:20:10.597177 1609109 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1009 23:20:10.597184 1609109 command_runner.go:130] > # This option supports live configuration reload.
	I1009 23:20:10.597689 1609109 command_runner.go:130] > # rdt_config_file = ""
	I1009 23:20:10.597705 1609109 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1009 23:20:10.598025 1609109 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1009 23:20:10.598041 1609109 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1009 23:20:10.598512 1609109 command_runner.go:130] > # separate_pull_cgroup = ""
	I1009 23:20:10.598528 1609109 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1009 23:20:10.598536 1609109 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1009 23:20:10.598541 1609109 command_runner.go:130] > # will be added.
	I1009 23:20:10.598875 1609109 command_runner.go:130] > # default_capabilities = [
	I1009 23:20:10.599392 1609109 command_runner.go:130] > # 	"CHOWN",
	I1009 23:20:10.599759 1609109 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1009 23:20:10.600081 1609109 command_runner.go:130] > # 	"FSETID",
	I1009 23:20:10.600409 1609109 command_runner.go:130] > # 	"FOWNER",
	I1009 23:20:10.600728 1609109 command_runner.go:130] > # 	"SETGID",
	I1009 23:20:10.601049 1609109 command_runner.go:130] > # 	"SETUID",
	I1009 23:20:10.602280 1609109 command_runner.go:130] > # 	"SETPCAP",
	I1009 23:20:10.602298 1609109 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1009 23:20:10.602304 1609109 command_runner.go:130] > # 	"KILL",
	I1009 23:20:10.602308 1609109 command_runner.go:130] > # ]
	I1009 23:20:10.602318 1609109 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1009 23:20:10.602327 1609109 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1009 23:20:10.602336 1609109 command_runner.go:130] > # add_inheritable_capabilities = true
	I1009 23:20:10.602348 1609109 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1009 23:20:10.602358 1609109 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1009 23:20:10.602368 1609109 command_runner.go:130] > # default_sysctls = [
	I1009 23:20:10.602373 1609109 command_runner.go:130] > # ]
	I1009 23:20:10.602379 1609109 command_runner.go:130] > # List of devices on the host that a
	I1009 23:20:10.602392 1609109 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1009 23:20:10.602398 1609109 command_runner.go:130] > # allowed_devices = [
	I1009 23:20:10.602406 1609109 command_runner.go:130] > # 	"/dev/fuse",
	I1009 23:20:10.602410 1609109 command_runner.go:130] > # ]
	I1009 23:20:10.602417 1609109 command_runner.go:130] > # List of additional devices, specified as
	I1009 23:20:10.602434 1609109 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1009 23:20:10.602445 1609109 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1009 23:20:10.602453 1609109 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1009 23:20:10.602461 1609109 command_runner.go:130] > # additional_devices = [
	I1009 23:20:10.602466 1609109 command_runner.go:130] > # ]
	I1009 23:20:10.602472 1609109 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1009 23:20:10.602481 1609109 command_runner.go:130] > # cdi_spec_dirs = [
	I1009 23:20:10.602486 1609109 command_runner.go:130] > # 	"/etc/cdi",
	I1009 23:20:10.602492 1609109 command_runner.go:130] > # 	"/var/run/cdi",
	I1009 23:20:10.602500 1609109 command_runner.go:130] > # ]
	I1009 23:20:10.602508 1609109 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1009 23:20:10.602516 1609109 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1009 23:20:10.602523 1609109 command_runner.go:130] > # Defaults to false.
	I1009 23:20:10.602530 1609109 command_runner.go:130] > # device_ownership_from_security_context = false
	I1009 23:20:10.602543 1609109 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1009 23:20:10.602551 1609109 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1009 23:20:10.602559 1609109 command_runner.go:130] > # hooks_dir = [
	I1009 23:20:10.602565 1609109 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1009 23:20:10.602570 1609109 command_runner.go:130] > # ]
	I1009 23:20:10.602581 1609109 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1009 23:20:10.602590 1609109 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1009 23:20:10.602600 1609109 command_runner.go:130] > # its default mounts from the following two files:
	I1009 23:20:10.602604 1609109 command_runner.go:130] > #
	I1009 23:20:10.602612 1609109 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1009 23:20:10.602622 1609109 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1009 23:20:10.602632 1609109 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1009 23:20:10.602636 1609109 command_runner.go:130] > #
	I1009 23:20:10.602644 1609109 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1009 23:20:10.602655 1609109 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1009 23:20:10.602664 1609109 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1009 23:20:10.602675 1609109 command_runner.go:130] > #      only add mounts it finds in this file.
	I1009 23:20:10.602680 1609109 command_runner.go:130] > #
	I1009 23:20:10.602692 1609109 command_runner.go:130] > # default_mounts_file = ""
	I1009 23:20:10.602699 1609109 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1009 23:20:10.602708 1609109 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1009 23:20:10.602714 1609109 command_runner.go:130] > # pids_limit = 0
	I1009 23:20:10.602723 1609109 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1009 23:20:10.602733 1609109 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1009 23:20:10.602742 1609109 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1009 23:20:10.602755 1609109 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1009 23:20:10.602761 1609109 command_runner.go:130] > # log_size_max = -1
	I1009 23:20:10.602772 1609109 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1009 23:20:10.602778 1609109 command_runner.go:130] > # log_to_journald = false
	I1009 23:20:10.602786 1609109 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1009 23:20:10.602795 1609109 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1009 23:20:10.602802 1609109 command_runner.go:130] > # Path to directory for container attach sockets.
	I1009 23:20:10.602811 1609109 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1009 23:20:10.602818 1609109 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1009 23:20:10.602845 1609109 command_runner.go:130] > # bind_mount_prefix = ""
	I1009 23:20:10.602858 1609109 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1009 23:20:10.602864 1609109 command_runner.go:130] > # read_only = false
	I1009 23:20:10.602872 1609109 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1009 23:20:10.602883 1609109 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1009 23:20:10.602888 1609109 command_runner.go:130] > # live configuration reload.
	I1009 23:20:10.602895 1609109 command_runner.go:130] > # log_level = "info"
	I1009 23:20:10.602907 1609109 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1009 23:20:10.602914 1609109 command_runner.go:130] > # This option supports live configuration reload.
	I1009 23:20:10.602922 1609109 command_runner.go:130] > # log_filter = ""
	I1009 23:20:10.602931 1609109 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1009 23:20:10.602942 1609109 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1009 23:20:10.602947 1609109 command_runner.go:130] > # separated by comma.
	I1009 23:20:10.602953 1609109 command_runner.go:130] > # uid_mappings = ""
	I1009 23:20:10.602960 1609109 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1009 23:20:10.602973 1609109 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1009 23:20:10.602979 1609109 command_runner.go:130] > # separated by comma.
	I1009 23:20:10.602988 1609109 command_runner.go:130] > # gid_mappings = ""
	I1009 23:20:10.602996 1609109 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1009 23:20:10.603009 1609109 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1009 23:20:10.603017 1609109 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1009 23:20:10.603026 1609109 command_runner.go:130] > # minimum_mappable_uid = -1
	I1009 23:20:10.603033 1609109 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1009 23:20:10.603041 1609109 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1009 23:20:10.603048 1609109 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1009 23:20:10.603056 1609109 command_runner.go:130] > # minimum_mappable_gid = -1
	I1009 23:20:10.603065 1609109 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1009 23:20:10.603075 1609109 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1009 23:20:10.603083 1609109 command_runner.go:130] > # value is 30s; lower values are ignored by CRI-O.
	I1009 23:20:10.603092 1609109 command_runner.go:130] > # ctr_stop_timeout = 30
	I1009 23:20:10.603100 1609109 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1009 23:20:10.603110 1609109 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1009 23:20:10.603130 1609109 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1009 23:20:10.603138 1609109 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1009 23:20:10.603151 1609109 command_runner.go:130] > # drop_infra_ctr = true
	I1009 23:20:10.603159 1609109 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1009 23:20:10.603169 1609109 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1009 23:20:10.603179 1609109 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1009 23:20:10.603187 1609109 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1009 23:20:10.603195 1609109 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1009 23:20:10.603204 1609109 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1009 23:20:10.603210 1609109 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1009 23:20:10.603219 1609109 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1009 23:20:10.603224 1609109 command_runner.go:130] > # pinns_path = ""
	I1009 23:20:10.603234 1609109 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1009 23:20:10.603246 1609109 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1009 23:20:10.603254 1609109 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1009 23:20:10.603263 1609109 command_runner.go:130] > # default_runtime = "runc"
	I1009 23:20:10.603269 1609109 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1009 23:20:10.603282 1609109 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1009 23:20:10.603294 1609109 command_runner.go:130] > # This option protects against source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1009 23:20:10.603304 1609109 command_runner.go:130] > # creation as a file is not desired either.
	I1009 23:20:10.603314 1609109 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1009 23:20:10.603323 1609109 command_runner.go:130] > # the hostname is being managed dynamically.
	I1009 23:20:10.603328 1609109 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1009 23:20:10.603337 1609109 command_runner.go:130] > # ]
	I1009 23:20:10.603345 1609109 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1009 23:20:10.603357 1609109 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1009 23:20:10.603366 1609109 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1009 23:20:10.603376 1609109 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1009 23:20:10.603381 1609109 command_runner.go:130] > #
	I1009 23:20:10.603387 1609109 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1009 23:20:10.603393 1609109 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1009 23:20:10.603400 1609109 command_runner.go:130] > #  runtime_type = "oci"
	I1009 23:20:10.603406 1609109 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1009 23:20:10.603415 1609109 command_runner.go:130] > #  privileged_without_host_devices = false
	I1009 23:20:10.603421 1609109 command_runner.go:130] > #  allowed_annotations = []
	I1009 23:20:10.603429 1609109 command_runner.go:130] > # Where:
	I1009 23:20:10.603436 1609109 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1009 23:20:10.603447 1609109 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1009 23:20:10.603455 1609109 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1009 23:20:10.603466 1609109 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1009 23:20:10.603470 1609109 command_runner.go:130] > #   in $PATH.
	I1009 23:20:10.603494 1609109 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1009 23:20:10.603505 1609109 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1009 23:20:10.603513 1609109 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1009 23:20:10.603521 1609109 command_runner.go:130] > #   state.
	I1009 23:20:10.603529 1609109 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1009 23:20:10.603541 1609109 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1009 23:20:10.603549 1609109 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1009 23:20:10.603557 1609109 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1009 23:20:10.603565 1609109 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1009 23:20:10.603574 1609109 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1009 23:20:10.603583 1609109 command_runner.go:130] > #   The currently recognized values are:
	I1009 23:20:10.603591 1609109 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1009 23:20:10.603603 1609109 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1009 23:20:10.603611 1609109 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1009 23:20:10.603621 1609109 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1009 23:20:10.603631 1609109 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1009 23:20:10.603642 1609109 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1009 23:20:10.603649 1609109 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1009 23:20:10.603659 1609109 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1009 23:20:10.603667 1609109 command_runner.go:130] > #   should be moved to the container's cgroup
	I1009 23:20:10.603677 1609109 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1009 23:20:10.603683 1609109 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I1009 23:20:10.603689 1609109 command_runner.go:130] > runtime_type = "oci"
	I1009 23:20:10.603698 1609109 command_runner.go:130] > runtime_root = "/run/runc"
	I1009 23:20:10.603704 1609109 command_runner.go:130] > runtime_config_path = ""
	I1009 23:20:10.603709 1609109 command_runner.go:130] > monitor_path = ""
	I1009 23:20:10.603718 1609109 command_runner.go:130] > monitor_cgroup = ""
	I1009 23:20:10.603723 1609109 command_runner.go:130] > monitor_exec_cgroup = ""
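The [crio.runtime.runtimes.runc] block above is the one concrete instance of the runtime-handler format documented earlier in this config. Purely as an illustration of that format (crun and its paths are assumptions, not part of this node's configuration), an extra handler would be registered the same way:

    sudo tee /etc/crio/crio.conf.d/10-crun.conf <<'EOF'
    [crio.runtime.runtimes.crun]
    runtime_path = "/usr/bin/crun"   # assumed install location
    runtime_type = "oci"
    runtime_root = "/run/crun"
    EOF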
	I1009 23:20:10.603758 1609109 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1009 23:20:10.603767 1609109 command_runner.go:130] > # running containers
	I1009 23:20:10.603773 1609109 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1009 23:20:10.603781 1609109 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1009 23:20:10.603792 1609109 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1009 23:20:10.603800 1609109 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1009 23:20:10.603809 1609109 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1009 23:20:10.603815 1609109 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1009 23:20:10.603821 1609109 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1009 23:20:10.603827 1609109 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1009 23:20:10.603836 1609109 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1009 23:20:10.603842 1609109 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1009 23:20:10.603853 1609109 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1009 23:20:10.603862 1609109 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1009 23:20:10.603872 1609109 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1009 23:20:10.603882 1609109 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix, and a set of resources it supports mutating.
	I1009 23:20:10.603895 1609109 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1009 23:20:10.603902 1609109 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1009 23:20:10.603913 1609109 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1009 23:20:10.603925 1609109 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1009 23:20:10.603936 1609109 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1009 23:20:10.603946 1609109 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1009 23:20:10.603953 1609109 command_runner.go:130] > # Example:
	I1009 23:20:10.603959 1609109 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1009 23:20:10.603971 1609109 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1009 23:20:10.603978 1609109 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1009 23:20:10.603989 1609109 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1009 23:20:10.603994 1609109 command_runner.go:130] > # cpuset = 0
	I1009 23:20:10.603999 1609109 command_runner.go:130] > # cpushares = "0-1"
	I1009 23:20:10.604004 1609109 command_runner.go:130] > # Where:
	I1009 23:20:10.604010 1609109 command_runner.go:130] > # The workload name is workload-type.
	I1009 23:20:10.604021 1609109 command_runner.go:130] > # To select this workload, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1009 23:20:10.604032 1609109 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1009 23:20:10.604040 1609109 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1009 23:20:10.604053 1609109 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1009 23:20:10.604064 1609109 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1009 23:20:10.604069 1609109 command_runner.go:130] > # 
	I1009 23:20:10.604083 1609109 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1009 23:20:10.604087 1609109 command_runner.go:130] > #
	I1009 23:20:10.604095 1609109 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1009 23:20:10.604102 1609109 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1009 23:20:10.604131 1609109 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1009 23:20:10.604144 1609109 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1009 23:20:10.604152 1609109 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1009 23:20:10.604159 1609109 command_runner.go:130] > [crio.image]
	I1009 23:20:10.604167 1609109 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1009 23:20:10.604173 1609109 command_runner.go:130] > # default_transport = "docker://"
	I1009 23:20:10.604181 1609109 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1009 23:20:10.604189 1609109 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1009 23:20:10.604199 1609109 command_runner.go:130] > # global_auth_file = ""
	I1009 23:20:10.604206 1609109 command_runner.go:130] > # The image used to instantiate infra containers.
	I1009 23:20:10.604217 1609109 command_runner.go:130] > # This option supports live configuration reload.
	I1009 23:20:10.604223 1609109 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1009 23:20:10.604235 1609109 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1009 23:20:10.604243 1609109 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1009 23:20:10.604253 1609109 command_runner.go:130] > # This option supports live configuration reload.
	I1009 23:20:10.604258 1609109 command_runner.go:130] > # pause_image_auth_file = ""
	I1009 23:20:10.604266 1609109 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1009 23:20:10.604273 1609109 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1009 23:20:10.604283 1609109 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1009 23:20:10.604295 1609109 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1009 23:20:10.604301 1609109 command_runner.go:130] > # pause_command = "/pause"
	I1009 23:20:10.604313 1609109 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1009 23:20:10.604322 1609109 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1009 23:20:10.604332 1609109 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1009 23:20:10.604340 1609109 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1009 23:20:10.604347 1609109 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1009 23:20:10.604352 1609109 command_runner.go:130] > # signature_policy = ""
	I1009 23:20:10.604362 1609109 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1009 23:20:10.604374 1609109 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1009 23:20:10.604379 1609109 command_runner.go:130] > # changing them here.
	I1009 23:20:10.604387 1609109 command_runner.go:130] > # insecure_registries = [
	I1009 23:20:10.604392 1609109 command_runner.go:130] > # ]
	I1009 23:20:10.604400 1609109 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1009 23:20:10.604410 1609109 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1009 23:20:10.604415 1609109 command_runner.go:130] > # image_volumes = "mkdir"
	I1009 23:20:10.604422 1609109 command_runner.go:130] > # Temporary directory to use for storing big files
	I1009 23:20:10.604427 1609109 command_runner.go:130] > # big_files_temporary_dir = ""
	I1009 23:20:10.604435 1609109 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1009 23:20:10.604440 1609109 command_runner.go:130] > # CNI plugins.
	I1009 23:20:10.604447 1609109 command_runner.go:130] > [crio.network]
	I1009 23:20:10.604455 1609109 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1009 23:20:10.604465 1609109 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1009 23:20:10.604471 1609109 command_runner.go:130] > # cni_default_network = ""
	I1009 23:20:10.604481 1609109 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1009 23:20:10.604487 1609109 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1009 23:20:10.604497 1609109 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1009 23:20:10.604503 1609109 command_runner.go:130] > # plugin_dirs = [
	I1009 23:20:10.604513 1609109 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1009 23:20:10.604517 1609109 command_runner.go:130] > # ]
	I1009 23:20:10.604525 1609109 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1009 23:20:10.604530 1609109 command_runner.go:130] > [crio.metrics]
	I1009 23:20:10.604536 1609109 command_runner.go:130] > # Globally enable or disable metrics support.
	I1009 23:20:10.604543 1609109 command_runner.go:130] > # enable_metrics = false
	I1009 23:20:10.604550 1609109 command_runner.go:130] > # Specify enabled metrics collectors.
	I1009 23:20:10.604558 1609109 command_runner.go:130] > # Per default all metrics are enabled.
	I1009 23:20:10.604566 1609109 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1009 23:20:10.604577 1609109 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1009 23:20:10.604586 1609109 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1009 23:20:10.604594 1609109 command_runner.go:130] > # metrics_collectors = [
	I1009 23:20:10.604599 1609109 command_runner.go:130] > # 	"operations",
	I1009 23:20:10.604605 1609109 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1009 23:20:10.604611 1609109 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1009 23:20:10.604618 1609109 command_runner.go:130] > # 	"operations_errors",
	I1009 23:20:10.604624 1609109 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1009 23:20:10.604632 1609109 command_runner.go:130] > # 	"image_pulls_by_name",
	I1009 23:20:10.604638 1609109 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1009 23:20:10.604647 1609109 command_runner.go:130] > # 	"image_pulls_failures",
	I1009 23:20:10.604653 1609109 command_runner.go:130] > # 	"image_pulls_successes",
	I1009 23:20:10.604664 1609109 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1009 23:20:10.604669 1609109 command_runner.go:130] > # 	"image_layer_reuse",
	I1009 23:20:10.604678 1609109 command_runner.go:130] > # 	"containers_oom_total",
	I1009 23:20:10.604683 1609109 command_runner.go:130] > # 	"containers_oom",
	I1009 23:20:10.604688 1609109 command_runner.go:130] > # 	"processes_defunct",
	I1009 23:20:10.604693 1609109 command_runner.go:130] > # 	"operations_total",
	I1009 23:20:10.604699 1609109 command_runner.go:130] > # 	"operations_latency_seconds",
	I1009 23:20:10.604707 1609109 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1009 23:20:10.604713 1609109 command_runner.go:130] > # 	"operations_errors_total",
	I1009 23:20:10.604721 1609109 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1009 23:20:10.604727 1609109 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1009 23:20:10.604733 1609109 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1009 23:20:10.604741 1609109 command_runner.go:130] > # 	"image_pulls_success_total",
	I1009 23:20:10.604749 1609109 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1009 23:20:10.604758 1609109 command_runner.go:130] > # 	"containers_oom_count_total",
	I1009 23:20:10.604762 1609109 command_runner.go:130] > # ]
	I1009 23:20:10.604769 1609109 command_runner.go:130] > # The port on which the metrics server will listen.
	I1009 23:20:10.604774 1609109 command_runner.go:130] > # metrics_port = 9090
	I1009 23:20:10.604783 1609109 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1009 23:20:10.604789 1609109 command_runner.go:130] > # metrics_socket = ""
	I1009 23:20:10.604798 1609109 command_runner.go:130] > # The certificate for the secure metrics server.
	I1009 23:20:10.604807 1609109 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1009 23:20:10.604817 1609109 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1009 23:20:10.604843 1609109 command_runner.go:130] > # certificate on any modification event.
	I1009 23:20:10.604850 1609109 command_runner.go:130] > # metrics_cert = ""
	I1009 23:20:10.604858 1609109 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1009 23:20:10.604864 1609109 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1009 23:20:10.604869 1609109 command_runner.go:130] > # metrics_key = ""
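If enable_metrics were set to true, the collectors listed above would be served in Prometheus exposition format on metrics_port. A quick check from inside the node (9090 per the default above):

curl -s http://127.0.0.1:9090/metrics | grep '^crio_operations'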
	I1009 23:20:10.604882 1609109 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1009 23:20:10.604887 1609109 command_runner.go:130] > [crio.tracing]
	I1009 23:20:10.604897 1609109 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1009 23:20:10.604903 1609109 command_runner.go:130] > # enable_tracing = false
	I1009 23:20:10.604913 1609109 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1009 23:20:10.604919 1609109 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1009 23:20:10.604928 1609109 command_runner.go:130] > # Number of samples to collect per million spans.
	I1009 23:20:10.604934 1609109 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
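Tracing is disabled by default. A sketch of a drop-in that enables it and samples every span; the collector address is an assumption about the environment, and CRI-O must be restarted to pick the change up:

sudo tee /etc/crio/crio.conf.d/10-tracing.conf >/dev/null <<'EOF'
[crio.tracing]
enable_tracing = true
tracing_endpoint = "127.0.0.1:4317"
tracing_sampling_rate_per_million = 1000000
EOF
sudo systemctl restart crio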
	I1009 23:20:10.604942 1609109 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1009 23:20:10.604947 1609109 command_runner.go:130] > [crio.stats]
	I1009 23:20:10.604954 1609109 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1009 23:20:10.604964 1609109 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1009 23:20:10.604970 1609109 command_runner.go:130] > # stats_collection_period = 0
	I1009 23:20:10.606735 1609109 command_runner.go:130] ! time="2023-10-09 23:20:10.573702027Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I1009 23:20:10.606761 1609109 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1009 23:20:10.607106 1609109 cni.go:84] Creating CNI manager for ""
	I1009 23:20:10.607136 1609109 cni.go:136] 2 nodes found, recommending kindnet
	I1009 23:20:10.607146 1609109 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1009 23:20:10.607166 1609109 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-717678 NodeName:multinode-717678-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 23:20:10.607290 1609109 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-717678-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
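A rendered config like the one above can be sanity-checked before kubeadm consumes it; recent kubeadm releases (v1.26 and later) ship a validator. The file path here is hypothetical:

sudo kubeadm config validate --config /tmp/kubeadm.yaml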
	
	I1009 23:20:10.607348 1609109 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-717678-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:multinode-717678 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1009 23:20:10.607411 1609109 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1009 23:20:10.618222 1609109 command_runner.go:130] > kubeadm
	I1009 23:20:10.618243 1609109 command_runner.go:130] > kubectl
	I1009 23:20:10.618249 1609109 command_runner.go:130] > kubelet
	I1009 23:20:10.619758 1609109 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 23:20:10.619831 1609109 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1009 23:20:10.631334 1609109 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1009 23:20:10.654454 1609109 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
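The kubelet.service unit and the 10-kubeadm.conf drop-in just copied can be inspected on the node in one shot; systemd prints the unit together with all of its drop-ins:

systemctl cat kubelet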
	I1009 23:20:10.678766 1609109 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1009 23:20:10.683474 1609109 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
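The bash one-liner above rewrites /etc/hosts through a temp file so the control-plane name resolves locally on the worker. The result can be confirmed from that node (the -n flag selects the worker, assuming the current minikube CLI):

minikube -p multinode-717678 ssh -n m02 -- getent hosts control-plane.minikube.internal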
	I1009 23:20:10.697763 1609109 host.go:66] Checking if "multinode-717678" exists ...
	I1009 23:20:10.698039 1609109 start.go:304] JoinCluster: &{Name:multinode-717678 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-717678 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:
9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1009 23:20:10.698165 1609109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1009 23:20:10.698214 1609109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-717678
	I1009 23:20:10.698640 1609109 config.go:182] Loaded profile config "multinode-717678": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1009 23:20:10.722016 1609109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34434 SSHKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/machines/multinode-717678/id_rsa Username:docker}
	I1009 23:20:10.892245 1609109 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token a04k2z.je0vv00zquo992xx --discovery-token-ca-cert-hash sha256:e2aebf53348f507bad0adab8a765b229b70810954e22f1e7a919941009267e3f 
	I1009 23:20:10.892311 1609109 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1009 23:20:10.892354 1609109 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token a04k2z.je0vv00zquo992xx --discovery-token-ca-cert-hash sha256:e2aebf53348f507bad0adab8a765b229b70810954e22f1e7a919941009267e3f --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-717678-m02"
	I1009 23:20:10.937556 1609109 command_runner.go:130] > [preflight] Running pre-flight checks
	I1009 23:20:10.985252 1609109 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I1009 23:20:10.985279 1609109 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1047-aws
	I1009 23:20:10.985286 1609109 command_runner.go:130] > OS: Linux
	I1009 23:20:10.985292 1609109 command_runner.go:130] > CGROUPS_CPU: enabled
	I1009 23:20:10.985306 1609109 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I1009 23:20:10.985314 1609109 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I1009 23:20:10.985323 1609109 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I1009 23:20:10.985330 1609109 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I1009 23:20:10.985340 1609109 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I1009 23:20:10.985353 1609109 command_runner.go:130] > CGROUPS_PIDS: enabled
	I1009 23:20:10.985363 1609109 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I1009 23:20:10.985369 1609109 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I1009 23:20:11.105040 1609109 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1009 23:20:11.106118 1609109 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1009 23:20:11.139902 1609109 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 23:20:11.140226 1609109 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 23:20:11.140243 1609109 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1009 23:20:11.241531 1609109 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1009 23:20:14.759253 1609109 command_runner.go:130] > This node has joined the cluster:
	I1009 23:20:14.759275 1609109 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1009 23:20:14.759283 1609109 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1009 23:20:14.759292 1609109 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1009 23:20:14.762574 1609109 command_runner.go:130] ! W1009 23:20:10.937044    1003 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I1009 23:20:14.762604 1609109 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1047-aws\n", err: exit status 1
	I1009 23:20:14.762618 1609109 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 23:20:14.762630 1609109 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token a04k2z.je0vv00zquo992xx --discovery-token-ca-cert-hash sha256:e2aebf53348f507bad0adab8a765b229b70810954e22f1e7a919941009267e3f --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-717678-m02": (3.870260067s)
	I1009 23:20:14.762645 1609109 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1009 23:20:14.979414 1609109 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I1009 23:20:14.979605 1609109 start.go:306] JoinCluster complete in 4.281562308s
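The worker join above completed in about 3.9s. To mint a fresh join command by hand against this control plane, the same kubeadm binary the test uses can be invoked (the token and hash will differ from the values logged above):

minikube -p multinode-717678 ssh -- sudo /var/lib/minikube/binaries/v1.28.2/kubeadm token create --print-join-command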
	I1009 23:20:14.979622 1609109 cni.go:84] Creating CNI manager for ""
	I1009 23:20:14.979629 1609109 cni.go:136] 2 nodes found, recommending kindnet
	I1009 23:20:14.979709 1609109 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1009 23:20:14.984677 1609109 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1009 23:20:14.984704 1609109 command_runner.go:130] >   Size: 3841245   	Blocks: 7504       IO Block: 4096   regular file
	I1009 23:20:14.984713 1609109 command_runner.go:130] > Device: 3ah/58d	Inode: 1308851     Links: 1
	I1009 23:20:14.984722 1609109 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1009 23:20:14.984732 1609109 command_runner.go:130] > Access: 2023-05-09 19:54:42.000000000 +0000
	I1009 23:20:14.984739 1609109 command_runner.go:130] > Modify: 2023-05-09 19:54:42.000000000 +0000
	I1009 23:20:14.984745 1609109 command_runner.go:130] > Change: 2023-10-09 22:55:10.333391806 +0000
	I1009 23:20:14.984752 1609109 command_runner.go:130] >  Birth: 2023-10-09 22:55:10.293391681 +0000
	I1009 23:20:14.984802 1609109 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.2/kubectl ...
	I1009 23:20:14.984815 1609109 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1009 23:20:15.040394 1609109 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1009 23:20:15.394490 1609109 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1009 23:20:15.399669 1609109 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1009 23:20:15.403411 1609109 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1009 23:20:15.416941 1609109 command_runner.go:130] > daemonset.apps/kindnet configured
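Once the manifest is applied, the kindnet DaemonSet should place one pod on each of the two nodes; a quick check, assuming the standard app=kindnet label the manifest uses:

kubectl --context multinode-717678 -n kube-system get pods -l app=kindnet -o wide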
	I1009 23:20:15.422800 1609109 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17375-1537865/kubeconfig
	I1009 23:20:15.423066 1609109 kapi.go:59] client config for multinode-717678: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/multinode-717678/client.crt", KeyFile:"/home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/multinode-717678/client.key", CAFile:"/home/jenkins/minikube-integration/17375-1537865/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16b67c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 23:20:15.423426 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1009 23:20:15.423443 1609109 round_trippers.go:469] Request Headers:
	I1009 23:20:15.423452 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:15.423463 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:20:15.426029 1609109 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:15.426053 1609109 round_trippers.go:577] Response Headers:
	I1009 23:20:15.426062 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:20:15.426069 1609109 round_trippers.go:580]     Content-Length: 291
	I1009 23:20:15.426076 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:15 GMT
	I1009 23:20:15.426082 1609109 round_trippers.go:580]     Audit-Id: c6e1b954-8b37-4127-8742-39d75177a31a
	I1009 23:20:15.426098 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:15.426105 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:15.426111 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:20:15.426139 1609109 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"a040aa83-288a-42e7-9e24-15b47b6337a4","resourceVersion":"418","creationTimestamp":"2023-10-09T23:19:11Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1009 23:20:15.426231 1609109 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-717678" context rescaled to 1 replicas
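The rescale above pins CoreDNS to a single replica for this two-node cluster; the equivalent manual command would be:

kubectl --context multinode-717678 -n kube-system scale deployment coredns --replicas=1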
	I1009 23:20:15.426261 1609109 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1009 23:20:15.431036 1609109 out.go:177] * Verifying Kubernetes components...
	I1009 23:20:15.433189 1609109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 23:20:15.447432 1609109 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17375-1537865/kubeconfig
	I1009 23:20:15.447730 1609109 kapi.go:59] client config for multinode-717678: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/multinode-717678/client.crt", KeyFile:"/home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/multinode-717678/client.key", CAFile:"/home/jenkins/minikube-integration/17375-1537865/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16b67c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1009 23:20:15.447997 1609109 node_ready.go:35] waiting up to 6m0s for node "multinode-717678-m02" to be "Ready" ...
	I1009 23:20:15.448064 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-717678-m02
	I1009 23:20:15.448074 1609109 round_trippers.go:469] Request Headers:
	I1009 23:20:15.448083 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:15.448090 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:20:15.450600 1609109 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:15.450624 1609109 round_trippers.go:577] Response Headers:
	I1009 23:20:15.450632 1609109 round_trippers.go:580]     Audit-Id: d782d19b-9479-4f08-8a74-3a7c88aca7e8
	I1009 23:20:15.450639 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:15.450646 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:15.450652 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:20:15.450659 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:20:15.450669 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:15 GMT
	I1009 23:20:15.450949 1609109 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-717678-m02","uid":"b105885e-7a7b-4363-b281-aaf0d995fc24","resourceVersion":"457","creationTimestamp":"2023-10-09T23:20:14Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-717678-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:20:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:20:1
4Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5292 chars]
	I1009 23:20:15.451389 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-717678-m02
	I1009 23:20:15.451407 1609109 round_trippers.go:469] Request Headers:
	I1009 23:20:15.451416 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:15.451423 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:20:15.453804 1609109 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:15.453852 1609109 round_trippers.go:577] Response Headers:
	I1009 23:20:15.453872 1609109 round_trippers.go:580]     Audit-Id: 9e043ca2-8138-4bd9-b0d6-70de700bdd96
	I1009 23:20:15.453894 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:15.453925 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:15.453948 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:20:15.453960 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:20:15.453966 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:15 GMT
	I1009 23:20:15.454079 1609109 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-717678-m02","uid":"b105885e-7a7b-4363-b281-aaf0d995fc24","resourceVersion":"457","creationTimestamp":"2023-10-09T23:20:14Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-717678-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:20:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:20:1
4Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5292 chars]
	I1009 23:20:15.954710 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-717678-m02
	I1009 23:20:15.954734 1609109 round_trippers.go:469] Request Headers:
	I1009 23:20:15.954745 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:15.954752 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:20:15.957463 1609109 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:15.957487 1609109 round_trippers.go:577] Response Headers:
	I1009 23:20:15.957496 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:20:15.957503 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:15 GMT
	I1009 23:20:15.957509 1609109 round_trippers.go:580]     Audit-Id: cff6f7e8-48a9-4c3c-96ee-b631324be878
	I1009 23:20:15.957517 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:15.957523 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:15.957530 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:20:15.957671 1609109 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-717678-m02","uid":"b105885e-7a7b-4363-b281-aaf0d995fc24","resourceVersion":"457","creationTimestamp":"2023-10-09T23:20:14Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-717678-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:20:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:20:1
4Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5292 chars]
	I1009 23:20:16.455350 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-717678-m02
	I1009 23:20:16.455376 1609109 round_trippers.go:469] Request Headers:
	I1009 23:20:16.455386 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:16.455394 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:20:16.458029 1609109 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:16.458059 1609109 round_trippers.go:577] Response Headers:
	I1009 23:20:16.458068 1609109 round_trippers.go:580]     Audit-Id: 37985c4c-47bc-4c89-9e12-38b36e073002
	I1009 23:20:16.458075 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:16.458082 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:16.458088 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:20:16.458095 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:20:16.458102 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:16 GMT
	I1009 23:20:16.458194 1609109 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-717678-m02","uid":"b105885e-7a7b-4363-b281-aaf0d995fc24","resourceVersion":"457","creationTimestamp":"2023-10-09T23:20:14Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-717678-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:20:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:20:1
4Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5292 chars]
	I1009 23:20:16.955160 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-717678-m02
	I1009 23:20:16.955228 1609109 round_trippers.go:469] Request Headers:
	I1009 23:20:16.955251 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:16.955272 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:20:16.961343 1609109 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1009 23:20:16.961365 1609109 round_trippers.go:577] Response Headers:
	I1009 23:20:16.961373 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:20:16.961380 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:20:16.961387 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:16 GMT
	I1009 23:20:16.961393 1609109 round_trippers.go:580]     Audit-Id: ddd46cea-8d78-4b0a-bdbc-e0800be2e241
	I1009 23:20:16.961399 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:16.961405 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:16.962147 1609109 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-717678-m02","uid":"b105885e-7a7b-4363-b281-aaf0d995fc24","resourceVersion":"474","creationTimestamp":"2023-10-09T23:20:14Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-717678-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:20:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:20:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5378 chars]
	I1009 23:20:16.962521 1609109 node_ready.go:49] node "multinode-717678-m02" has status "Ready":"True"
	I1009 23:20:16.962533 1609109 node_ready.go:38] duration metric: took 1.514517761s waiting for node "multinode-717678-m02" to be "Ready" ...
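The polling loop above (three GETs over roughly 1.5s) is minikube's own readiness gate; approximately the same check can be expressed directly with kubectl:

kubectl --context multinode-717678 wait --for=condition=Ready node/multinode-717678-m02 --timeout=6m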
	I1009 23:20:16.962542 1609109 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 23:20:16.962605 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1009 23:20:16.962612 1609109 round_trippers.go:469] Request Headers:
	I1009 23:20:16.962621 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:16.962627 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:20:16.967209 1609109 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1009 23:20:16.967239 1609109 round_trippers.go:577] Response Headers:
	I1009 23:20:16.967249 1609109 round_trippers.go:580]     Audit-Id: c97a5d11-bcbe-40ff-b198-204975187498
	I1009 23:20:16.967256 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:16.967263 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:16.967269 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:20:16.967276 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:20:16.967283 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:16 GMT
	I1009 23:20:16.967725 1609109 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"474"},"items":[{"metadata":{"name":"coredns-5dd5756b68-zz9n9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"319f2e3b-8eb5-4d49-bfa6-f7add29b87fd","resourceVersion":"414","creationTimestamp":"2023-10-09T23:19:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"2a4f696e-345b-4d7d-9e3b-7f5b62b3a01c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:19:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2a4f696e-345b-4d7d-9e3b-7f5b62b3a01c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 68972 chars]
	I1009 23:20:16.970599 1609109 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-zz9n9" in "kube-system" namespace to be "Ready" ...
	I1009 23:20:16.970691 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-zz9n9
	I1009 23:20:16.970704 1609109 round_trippers.go:469] Request Headers:
	I1009 23:20:16.970713 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:16.970722 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:20:16.973292 1609109 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:16.973317 1609109 round_trippers.go:577] Response Headers:
	I1009 23:20:16.973325 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:16.973331 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:16.973338 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:20:16.973349 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:20:16.973356 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:16 GMT
	I1009 23:20:16.973362 1609109 round_trippers.go:580]     Audit-Id: 6a7f2820-068d-4eba-947f-94ad55ee49d8
	I1009 23:20:16.973459 1609109 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-zz9n9","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"319f2e3b-8eb5-4d49-bfa6-f7add29b87fd","resourceVersion":"414","creationTimestamp":"2023-10-09T23:19:25Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"2a4f696e-345b-4d7d-9e3b-7f5b62b3a01c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:19:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2a4f696e-345b-4d7d-9e3b-7f5b62b3a01c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I1009 23:20:16.974018 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-717678
	I1009 23:20:16.974035 1609109 round_trippers.go:469] Request Headers:
	I1009 23:20:16.974043 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:16.974050 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:20:16.976293 1609109 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:16.976316 1609109 round_trippers.go:577] Response Headers:
	I1009 23:20:16.976323 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:20:16.976330 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:20:16.976337 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:16 GMT
	I1009 23:20:16.976343 1609109 round_trippers.go:580]     Audit-Id: 6cdffa31-9491-47c8-944b-ff76955de02b
	I1009 23:20:16.976350 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:16.976358 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:16.976536 1609109 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-717678","uid":"874e5ae6-ff43-40a8-8089-c062ee5a5cbc","resourceVersion":"398","creationTimestamp":"2023-10-09T23:19:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-717678","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-717678","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_19_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-09T23:19:08Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1009 23:20:16.976946 1609109 pod_ready.go:92] pod "coredns-5dd5756b68-zz9n9" in "kube-system" namespace has status "Ready":"True"
	I1009 23:20:16.976965 1609109 pod_ready.go:81] duration metric: took 6.335979ms waiting for pod "coredns-5dd5756b68-zz9n9" in "kube-system" namespace to be "Ready" ...
	I1009 23:20:16.976975 1609109 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-717678" in "kube-system" namespace to be "Ready" ...
	I1009 23:20:16.977046 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-717678
	I1009 23:20:16.977071 1609109 round_trippers.go:469] Request Headers:
	I1009 23:20:16.977089 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:16.977103 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:20:16.979507 1609109 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:16.979530 1609109 round_trippers.go:577] Response Headers:
	I1009 23:20:16.979546 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:16.979553 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:16.979561 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:20:16.979570 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:20:16.979577 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:16 GMT
	I1009 23:20:16.979587 1609109 round_trippers.go:580]     Audit-Id: 07bf6c80-1f28-46f7-ac17-572099315514
	I1009 23:20:16.979698 1609109 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-717678","namespace":"kube-system","uid":"05c1fa65-d9c1-4a32-b59a-8fb73083f98f","resourceVersion":"386","creationTimestamp":"2023-10-09T23:19:11Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"77edffb2dcd9b3fb05b40164ea3d4c0e","kubernetes.io/config.mirror":"77edffb2dcd9b3fb05b40164ea3d4c0e","kubernetes.io/config.seen":"2023-10-09T23:19:11.222865162Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-717678","uid":"874e5ae6-ff43-40a8-8089-c062ee5a5cbc","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:19:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I1009 23:20:16.980155 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-717678
	I1009 23:20:16.980171 1609109 round_trippers.go:469] Request Headers:
	I1009 23:20:16.980180 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:16.980187 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:20:16.982327 1609109 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:16.982380 1609109 round_trippers.go:577] Response Headers:
	I1009 23:20:16.982402 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:16.982421 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:20:16.982451 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:20:16.982473 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:16 GMT
	I1009 23:20:16.982488 1609109 round_trippers.go:580]     Audit-Id: 368e0431-08cd-4d47-935a-0edf5a62c37c
	I1009 23:20:16.982494 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:16.982624 1609109 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-717678","uid":"874e5ae6-ff43-40a8-8089-c062ee5a5cbc","resourceVersion":"398","creationTimestamp":"2023-10-09T23:19:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-717678","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-717678","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_19_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-09T23:19:08Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1009 23:20:16.983036 1609109 pod_ready.go:92] pod "etcd-multinode-717678" in "kube-system" namespace has status "Ready":"True"
	I1009 23:20:16.983055 1609109 pod_ready.go:81] duration metric: took 6.060925ms waiting for pod "etcd-multinode-717678" in "kube-system" namespace to be "Ready" ...
	I1009 23:20:16.983072 1609109 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-717678" in "kube-system" namespace to be "Ready" ...
	I1009 23:20:16.983153 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-717678
	I1009 23:20:16.983163 1609109 round_trippers.go:469] Request Headers:
	I1009 23:20:16.983171 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:16.983178 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:20:16.986241 1609109 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1009 23:20:16.986305 1609109 round_trippers.go:577] Response Headers:
	I1009 23:20:16.986339 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:16.986362 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:20:16.986388 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:20:16.986422 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:16 GMT
	I1009 23:20:16.986444 1609109 round_trippers.go:580]     Audit-Id: 3efd46f7-af4e-4396-bd64-e93d5e32ca4b
	I1009 23:20:16.986462 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:16.986603 1609109 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-717678","namespace":"kube-system","uid":"ab6577f1-1934-4fc4-bc32-83c7646ea4ce","resourceVersion":"387","creationTimestamp":"2023-10-09T23:19:11Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"e6de9bb12ea3b896eb151fe0950fa9cf","kubernetes.io/config.mirror":"e6de9bb12ea3b896eb151fe0950fa9cf","kubernetes.io/config.seen":"2023-10-09T23:19:11.222871866Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-717678","uid":"874e5ae6-ff43-40a8-8089-c062ee5a5cbc","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:19:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I1009 23:20:16.987228 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-717678
	I1009 23:20:16.987244 1609109 round_trippers.go:469] Request Headers:
	I1009 23:20:16.987253 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:16.987260 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:20:16.990178 1609109 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:16.990200 1609109 round_trippers.go:577] Response Headers:
	I1009 23:20:16.990209 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:20:16.990215 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:16 GMT
	I1009 23:20:16.990222 1609109 round_trippers.go:580]     Audit-Id: 76498d68-76f4-4f4a-879d-5f646939a5a4
	I1009 23:20:16.990228 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:16.990234 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:16.990240 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:20:16.990357 1609109 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-717678","uid":"874e5ae6-ff43-40a8-8089-c062ee5a5cbc","resourceVersion":"398","creationTimestamp":"2023-10-09T23:19:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-717678","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-717678","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_19_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-09T23:19:08Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1009 23:20:16.990742 1609109 pod_ready.go:92] pod "kube-apiserver-multinode-717678" in "kube-system" namespace has status "Ready":"True"
	I1009 23:20:16.990757 1609109 pod_ready.go:81] duration metric: took 7.677826ms waiting for pod "kube-apiserver-multinode-717678" in "kube-system" namespace to be "Ready" ...
	I1009 23:20:16.990775 1609109 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-717678" in "kube-system" namespace to be "Ready" ...
	I1009 23:20:16.990839 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-717678
	I1009 23:20:16.990847 1609109 round_trippers.go:469] Request Headers:
	I1009 23:20:16.990855 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:16.990862 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:20:16.993256 1609109 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:16.993277 1609109 round_trippers.go:577] Response Headers:
	I1009 23:20:16.993285 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:16 GMT
	I1009 23:20:16.993292 1609109 round_trippers.go:580]     Audit-Id: eb2d73ab-3476-4b49-8612-ab1406ba5cf4
	I1009 23:20:16.993298 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:16.993304 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:16.993314 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:20:16.993320 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:20:16.993461 1609109 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-717678","namespace":"kube-system","uid":"0ab0571d-b106-409a-a094-39501a8718a1","resourceVersion":"388","creationTimestamp":"2023-10-09T23:19:11Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"66e7ef2312034a4e7eda456783a5901a","kubernetes.io/config.mirror":"66e7ef2312034a4e7eda456783a5901a","kubernetes.io/config.seen":"2023-10-09T23:19:11.222873548Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-717678","uid":"874e5ae6-ff43-40a8-8089-c062ee5a5cbc","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:19:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I1009 23:20:16.993973 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-717678
	I1009 23:20:16.993990 1609109 round_trippers.go:469] Request Headers:
	I1009 23:20:16.993999 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:16.994005 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:20:16.996254 1609109 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:16.996276 1609109 round_trippers.go:577] Response Headers:
	I1009 23:20:16.996284 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:16.996290 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:16.996296 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:20:16.996329 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:20:16.996343 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:16 GMT
	I1009 23:20:16.996349 1609109 round_trippers.go:580]     Audit-Id: f968e6e2-0be3-4db9-826e-05229241b2f3
	I1009 23:20:16.996656 1609109 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-717678","uid":"874e5ae6-ff43-40a8-8089-c062ee5a5cbc","resourceVersion":"398","creationTimestamp":"2023-10-09T23:19:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-717678","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-717678","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_19_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-09T23:19:08Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1009 23:20:16.997069 1609109 pod_ready.go:92] pod "kube-controller-manager-multinode-717678" in "kube-system" namespace has status "Ready":"True"
	I1009 23:20:16.997088 1609109 pod_ready.go:81] duration metric: took 6.302338ms waiting for pod "kube-controller-manager-multinode-717678" in "kube-system" namespace to be "Ready" ...
	I1009 23:20:16.997102 1609109 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8zh7z" in "kube-system" namespace to be "Ready" ...
	I1009 23:20:17.155456 1609109 request.go:629] Waited for 158.288559ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8zh7z
	I1009 23:20:17.155579 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8zh7z
	I1009 23:20:17.155591 1609109 round_trippers.go:469] Request Headers:
	I1009 23:20:17.155600 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:17.155608 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:20:17.158172 1609109 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:17.158198 1609109 round_trippers.go:577] Response Headers:
	I1009 23:20:17.158206 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:20:17.158212 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:20:17.158226 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:17 GMT
	I1009 23:20:17.158237 1609109 round_trippers.go:580]     Audit-Id: c1fb1df0-a996-4df0-b9fa-619282b0d481
	I1009 23:20:17.158243 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:17.158253 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:17.158359 1609109 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8zh7z","generateName":"kube-proxy-","namespace":"kube-system","uid":"26420832-f9b9-4c98-b7c0-8b3f9d15b4aa","resourceVersion":"379","creationTimestamp":"2023-10-09T23:19:25Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"cf961607-8da1-41e8-a9f8-f66778682cbb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:19:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cf961607-8da1-41e8-a9f8-f66778682cbb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I1009 23:20:17.356104 1609109 request.go:629] Waited for 197.233165ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-717678
	I1009 23:20:17.356226 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-717678
	I1009 23:20:17.356235 1609109 round_trippers.go:469] Request Headers:
	I1009 23:20:17.356245 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:17.356253 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:20:17.358949 1609109 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:17.359010 1609109 round_trippers.go:577] Response Headers:
	I1009 23:20:17.359037 1609109 round_trippers.go:580]     Audit-Id: d700d487-5a3e-41e0-8998-94a78d096c2d
	I1009 23:20:17.359057 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:17.359090 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:17.359113 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:20:17.359154 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:20:17.359185 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:17 GMT
	I1009 23:20:17.359379 1609109 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-717678","uid":"874e5ae6-ff43-40a8-8089-c062ee5a5cbc","resourceVersion":"398","creationTimestamp":"2023-10-09T23:19:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-717678","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-717678","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_19_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-09T23:19:08Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1009 23:20:17.359841 1609109 pod_ready.go:92] pod "kube-proxy-8zh7z" in "kube-system" namespace has status "Ready":"True"
	I1009 23:20:17.359860 1609109 pod_ready.go:81] duration metric: took 362.749119ms waiting for pod "kube-proxy-8zh7z" in "kube-system" namespace to be "Ready" ...
	I1009 23:20:17.359874 1609109 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vrv88" in "kube-system" namespace to be "Ready" ...
	I1009 23:20:17.555213 1609109 request.go:629] Waited for 195.270977ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vrv88
	I1009 23:20:17.555284 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vrv88
	I1009 23:20:17.555295 1609109 round_trippers.go:469] Request Headers:
	I1009 23:20:17.555321 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:17.555332 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:20:17.557945 1609109 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:17.558033 1609109 round_trippers.go:577] Response Headers:
	I1009 23:20:17.558049 1609109 round_trippers.go:580]     Audit-Id: 2ec77664-51cf-4e75-a448-45a3a89954f0
	I1009 23:20:17.558056 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:17.558063 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:17.558088 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:20:17.558098 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:20:17.558105 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:17 GMT
	I1009 23:20:17.558234 1609109 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-vrv88","generateName":"kube-proxy-","namespace":"kube-system","uid":"db58a7de-0fcb-4262-b931-96142cbeaa6c","resourceVersion":"470","creationTimestamp":"2023-10-09T23:20:14Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"cf961607-8da1-41e8-a9f8-f66778682cbb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:20:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cf961607-8da1-41e8-a9f8-f66778682cbb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I1009 23:20:17.756099 1609109 request.go:629] Waited for 197.374007ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-717678-m02
	I1009 23:20:17.756177 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-717678-m02
	I1009 23:20:17.756187 1609109 round_trippers.go:469] Request Headers:
	I1009 23:20:17.756196 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:17.756203 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:20:17.758718 1609109 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:17.758744 1609109 round_trippers.go:577] Response Headers:
	I1009 23:20:17.758753 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:17.758760 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:20:17.758766 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:20:17.758773 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:17 GMT
	I1009 23:20:17.758779 1609109 round_trippers.go:580]     Audit-Id: e7e2f626-8bc1-4548-9a8c-796ce4dac4f7
	I1009 23:20:17.758786 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:17.759014 1609109 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-717678-m02","uid":"b105885e-7a7b-4363-b281-aaf0d995fc24","resourceVersion":"474","creationTimestamp":"2023-10-09T23:20:14Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-717678-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:20:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:20:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5378 chars]
	I1009 23:20:17.759432 1609109 pod_ready.go:92] pod "kube-proxy-vrv88" in "kube-system" namespace has status "Ready":"True"
	I1009 23:20:17.759452 1609109 pod_ready.go:81] duration metric: took 399.564505ms waiting for pod "kube-proxy-vrv88" in "kube-system" namespace to be "Ready" ...
	I1009 23:20:17.759464 1609109 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-717678" in "kube-system" namespace to be "Ready" ...
	I1009 23:20:17.955903 1609109 request.go:629] Waited for 196.342464ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-717678
	I1009 23:20:17.956001 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-717678
	I1009 23:20:17.956016 1609109 round_trippers.go:469] Request Headers:
	I1009 23:20:17.956027 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:17.956051 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:20:17.958759 1609109 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:17.958784 1609109 round_trippers.go:577] Response Headers:
	I1009 23:20:17.958793 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:17 GMT
	I1009 23:20:17.958799 1609109 round_trippers.go:580]     Audit-Id: fb5798a7-a880-4a29-8ee0-7d2b45a40cb8
	I1009 23:20:17.958806 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:17.958813 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:17.958819 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:20:17.958825 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:20:17.959058 1609109 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-717678","namespace":"kube-system","uid":"1efa97e5-8ca4-4dee-9657-510e82694828","resourceVersion":"385","creationTimestamp":"2023-10-09T23:19:11Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"0177a77cb732655e3ea7b32da15d984a","kubernetes.io/config.mirror":"0177a77cb732655e3ea7b32da15d984a","kubernetes.io/config.seen":"2023-10-09T23:19:11.222874639Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-717678","uid":"874e5ae6-ff43-40a8-8089-c062ee5a5cbc","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-10-09T23:19:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I1009 23:20:18.155911 1609109 request.go:629] Waited for 196.346025ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-717678
	I1009 23:20:18.155984 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-717678
	I1009 23:20:18.155993 1609109 round_trippers.go:469] Request Headers:
	I1009 23:20:18.156003 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:20:18.156014 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:18.158885 1609109 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:18.158913 1609109 round_trippers.go:577] Response Headers:
	I1009 23:20:18.158922 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:20:18.158929 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:20:18.158937 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:18 GMT
	I1009 23:20:18.158943 1609109 round_trippers.go:580]     Audit-Id: 3959bc2a-d754-4835-a8eb-1ae5d2a35c3d
	I1009 23:20:18.158950 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:18.158956 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:18.159070 1609109 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-717678","uid":"874e5ae6-ff43-40a8-8089-c062ee5a5cbc","resourceVersion":"398","creationTimestamp":"2023-10-09T23:19:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-717678","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-717678","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_19_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-10-09T23:19:08Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1009 23:20:18.159534 1609109 pod_ready.go:92] pod "kube-scheduler-multinode-717678" in "kube-system" namespace has status "Ready":"True"
	I1009 23:20:18.159560 1609109 pod_ready.go:81] duration metric: took 400.07969ms waiting for pod "kube-scheduler-multinode-717678" in "kube-system" namespace to be "Ready" ...
	I1009 23:20:18.159575 1609109 pod_ready.go:38] duration metric: took 1.197023816s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 23:20:18.159598 1609109 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 23:20:18.159664 1609109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 23:20:18.174859 1609109 system_svc.go:56] duration metric: took 15.242223ms WaitForService to wait for kubelet.
	I1009 23:20:18.174891 1609109 kubeadm.go:581] duration metric: took 2.7486027s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1009 23:20:18.174914 1609109 node_conditions.go:102] verifying NodePressure condition ...
	I1009 23:20:18.355253 1609109 request.go:629] Waited for 180.254577ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I1009 23:20:18.355330 1609109 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I1009 23:20:18.355341 1609109 round_trippers.go:469] Request Headers:
	I1009 23:20:18.355400 1609109 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1009 23:20:18.355412 1609109 round_trippers.go:473]     Accept: application/json, */*
	I1009 23:20:18.358222 1609109 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1009 23:20:18.358245 1609109 round_trippers.go:577] Response Headers:
	I1009 23:20:18.358254 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 41a35ec0-01de-4669-bbfb-c538e5048dcd
	I1009 23:20:18.358261 1609109 round_trippers.go:580]     Date: Mon, 09 Oct 2023 23:20:18 GMT
	I1009 23:20:18.358267 1609109 round_trippers.go:580]     Audit-Id: 6f0c2975-e6fd-45f0-bbf1-e10288721c91
	I1009 23:20:18.358278 1609109 round_trippers.go:580]     Cache-Control: no-cache, private
	I1009 23:20:18.358287 1609109 round_trippers.go:580]     Content-Type: application/json
	I1009 23:20:18.358293 1609109 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 7d62719a-ec99-494e-bc08-db3c4a2cbba0
	I1009 23:20:18.358480 1609109 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"475"},"items":[{"metadata":{"name":"multinode-717678","uid":"874e5ae6-ff43-40a8-8089-c062ee5a5cbc","resourceVersion":"398","creationTimestamp":"2023-10-09T23:19:08Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-717678","kubernetes.io/os":"linux","minikube.k8s.io/commit":"1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90","minikube.k8s.io/name":"multinode-717678","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_10_09T23_19_12_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 12452 chars]
	I1009 23:20:18.359113 1609109 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1009 23:20:18.359159 1609109 node_conditions.go:123] node cpu capacity is 2
	I1009 23:20:18.359170 1609109 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1009 23:20:18.359174 1609109 node_conditions.go:123] node cpu capacity is 2
	I1009 23:20:18.359179 1609109 node_conditions.go:105] duration metric: took 184.260153ms to run NodePressure ...
	I1009 23:20:18.359191 1609109 start.go:228] waiting for startup goroutines ...
	I1009 23:20:18.359215 1609109 start.go:242] writing updated cluster config ...
	I1009 23:20:18.359527 1609109 ssh_runner.go:195] Run: rm -f paused
	I1009 23:20:18.420785 1609109 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I1009 23:20:18.425379 1609109 out.go:177] * Done! kubectl is now configured to use "multinode-717678" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* Oct 09 23:19:57 multinode-717678 crio[903]: time="2023-10-09 23:19:57.796021508Z" level=info msg="Starting container: 902c8a394618670330aa8ddc76045da312b76cacb5dd6b76030aa4cb0936ebca" id=94dce27c-1848-45f9-a378-aed2c30fae6c name=/runtime.v1.RuntimeService/StartContainer
	Oct 09 23:19:57 multinode-717678 crio[903]: time="2023-10-09 23:19:57.808964589Z" level=info msg="Started container" PID=1934 containerID=902c8a394618670330aa8ddc76045da312b76cacb5dd6b76030aa4cb0936ebca description=kube-system/storage-provisioner/storage-provisioner id=94dce27c-1848-45f9-a378-aed2c30fae6c name=/runtime.v1.RuntimeService/StartContainer sandboxID=7dc25fe9e8ea9d6a7b5520801db94521d8ee89ebdfe7e998e0777c2aeedcc3b8
	Oct 09 23:19:57 multinode-717678 crio[903]: time="2023-10-09 23:19:57.830968573Z" level=info msg="Created container 2dc831b025a26bb5cbf1926d5702ecc62949111e3ab32afa25a24836a2d9dbef: kube-system/coredns-5dd5756b68-zz9n9/coredns" id=b769c730-b466-4ad7-913f-0dc2481da40d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 23:19:57 multinode-717678 crio[903]: time="2023-10-09 23:19:57.831666298Z" level=info msg="Starting container: 2dc831b025a26bb5cbf1926d5702ecc62949111e3ab32afa25a24836a2d9dbef" id=3038f07e-2e4e-4357-97d6-6cfe3bd63380 name=/runtime.v1.RuntimeService/StartContainer
	Oct 09 23:19:57 multinode-717678 crio[903]: time="2023-10-09 23:19:57.851424473Z" level=info msg="Started container" PID=1954 containerID=2dc831b025a26bb5cbf1926d5702ecc62949111e3ab32afa25a24836a2d9dbef description=kube-system/coredns-5dd5756b68-zz9n9/coredns id=3038f07e-2e4e-4357-97d6-6cfe3bd63380 name=/runtime.v1.RuntimeService/StartContainer sandboxID=eb1f2b4a2a4f8005bfd6f4374082b4821905860f9fff4e0258c5b046f08fe24c
	Oct 09 23:20:20 multinode-717678 crio[903]: time="2023-10-09 23:20:20.596774639Z" level=info msg="Running pod sandbox: default/busybox-5bc68d56bd-2rmqx/POD" id=6a047247-655b-4f3b-90e7-e2fbef4451a2 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 09 23:20:20 multinode-717678 crio[903]: time="2023-10-09 23:20:20.596845269Z" level=warning msg="Allowed annotations are specified for workload []"
	Oct 09 23:20:20 multinode-717678 crio[903]: time="2023-10-09 23:20:20.615957444Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-2rmqx Namespace:default ID:560bd1921687e2adeb17d33ccfb9dd3d4aab6b3ece63e96140fe1efb5006a19b UID:bd5d5264-b136-4526-9e03-070fcd80f6d6 NetNS:/var/run/netns/b342d5b6-95f4-4469-9b47-3eaf4f53d2e4 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 09 23:20:20 multinode-717678 crio[903]: time="2023-10-09 23:20:20.615996566Z" level=info msg="Adding pod default_busybox-5bc68d56bd-2rmqx to CNI network \"kindnet\" (type=ptp)"
	Oct 09 23:20:20 multinode-717678 crio[903]: time="2023-10-09 23:20:20.626416940Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-2rmqx Namespace:default ID:560bd1921687e2adeb17d33ccfb9dd3d4aab6b3ece63e96140fe1efb5006a19b UID:bd5d5264-b136-4526-9e03-070fcd80f6d6 NetNS:/var/run/netns/b342d5b6-95f4-4469-9b47-3eaf4f53d2e4 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 09 23:20:20 multinode-717678 crio[903]: time="2023-10-09 23:20:20.626565551Z" level=info msg="Checking pod default_busybox-5bc68d56bd-2rmqx for CNI network kindnet (type=ptp)"
	Oct 09 23:20:20 multinode-717678 crio[903]: time="2023-10-09 23:20:20.646018610Z" level=info msg="Ran pod sandbox 560bd1921687e2adeb17d33ccfb9dd3d4aab6b3ece63e96140fe1efb5006a19b with infra container: default/busybox-5bc68d56bd-2rmqx/POD" id=6a047247-655b-4f3b-90e7-e2fbef4451a2 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 09 23:20:20 multinode-717678 crio[903]: time="2023-10-09 23:20:20.647659501Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=79efcfef-d368-4504-939f-bbd706429e70 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 23:20:20 multinode-717678 crio[903]: time="2023-10-09 23:20:20.647910759Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=79efcfef-d368-4504-939f-bbd706429e70 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 23:20:20 multinode-717678 crio[903]: time="2023-10-09 23:20:20.649160946Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=c2d1d2e6-aedb-40f9-9695-9d46b3c76b57 name=/runtime.v1.ImageService/PullImage
	Oct 09 23:20:20 multinode-717678 crio[903]: time="2023-10-09 23:20:20.650329803Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Oct 09 23:20:21 multinode-717678 crio[903]: time="2023-10-09 23:20:21.306610690Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Oct 09 23:20:22 multinode-717678 crio[903]: time="2023-10-09 23:20:22.650904609Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3" id=c2d1d2e6-aedb-40f9-9695-9d46b3c76b57 name=/runtime.v1.ImageService/PullImage
	Oct 09 23:20:22 multinode-717678 crio[903]: time="2023-10-09 23:20:22.652218616Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=546180e5-3e1a-40da-880b-642e4028053c name=/runtime.v1.ImageService/ImageStatus
	Oct 09 23:20:22 multinode-717678 crio[903]: time="2023-10-09 23:20:22.652989793Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1496796,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=546180e5-3e1a-40da-880b-642e4028053c name=/runtime.v1.ImageService/ImageStatus
	Oct 09 23:20:22 multinode-717678 crio[903]: time="2023-10-09 23:20:22.654422594Z" level=info msg="Creating container: default/busybox-5bc68d56bd-2rmqx/busybox" id=5c5cc5b2-761e-43fa-b28b-6ae7ff6ff973 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 23:20:22 multinode-717678 crio[903]: time="2023-10-09 23:20:22.654528596Z" level=warning msg="Allowed annotations are specified for workload []"
	Oct 09 23:20:22 multinode-717678 crio[903]: time="2023-10-09 23:20:22.732019976Z" level=info msg="Created container b2c19a56b8a02268a52129210eab45918bf37cbce7916286b9e506cd171f6630: default/busybox-5bc68d56bd-2rmqx/busybox" id=5c5cc5b2-761e-43fa-b28b-6ae7ff6ff973 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 23:20:22 multinode-717678 crio[903]: time="2023-10-09 23:20:22.732821439Z" level=info msg="Starting container: b2c19a56b8a02268a52129210eab45918bf37cbce7916286b9e506cd171f6630" id=5ed77663-cb5a-4dd2-a23c-b1758cf366b5 name=/runtime.v1.RuntimeService/StartContainer
	Oct 09 23:20:22 multinode-717678 crio[903]: time="2023-10-09 23:20:22.745151127Z" level=info msg="Started container" PID=2082 containerID=b2c19a56b8a02268a52129210eab45918bf37cbce7916286b9e506cd171f6630 description=default/busybox-5bc68d56bd-2rmqx/busybox id=5ed77663-cb5a-4dd2-a23c-b1758cf366b5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=560bd1921687e2adeb17d33ccfb9dd3d4aab6b3ece63e96140fe1efb5006a19b
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	b2c19a56b8a02       gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3   5 seconds ago        Running             busybox                   0                   560bd1921687e       busybox-5bc68d56bd-2rmqx
	2dc831b025a26       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      30 seconds ago       Running             coredns                   0                   eb1f2b4a2a4f8       coredns-5dd5756b68-zz9n9
	902c8a3946186       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      30 seconds ago       Running             storage-provisioner       0                   7dc25fe9e8ea9       storage-provisioner
	dd6a59d335ceb       7da62c127fc0f2c3473babe4dd0fe1da874278c4e524a490b1781e3e0e6dddfa                                      About a minute ago   Running             kube-proxy                0                   31feffed80fd6       kube-proxy-8zh7z
	bf5012625cad6       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26                                      About a minute ago   Running             kindnet-cni               0                   aef48b5286cf1       kindnet-mr6j6
	631be2dbe50c8       89d57b83c17862d0ca2dd214e9e5ad425f8d67ecba32d10b846f8d22d3b5597c                                      About a minute ago   Running             kube-controller-manager   0                   940313dd96fa9       kube-controller-manager-multinode-717678
	154d5780dfb94       64fc40cee3716a4596d219b360ce536adb7998eaeae3f5dbb774d6503f5039d7                                      About a minute ago   Running             kube-scheduler            0                   23f48ea63dbe8       kube-scheduler-multinode-717678
	add32a9afc223       30bb499447fe1bc655853e2e8ac386cdeb28c80263536259cb54f290f9a58d6c                                      About a minute ago   Running             kube-apiserver            0                   5be86536278de       kube-apiserver-multinode-717678
	61bbd98243dee       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      About a minute ago   Running             etcd                      0                   3c423916d040e       etcd-multinode-717678
	
	* 
	* ==> coredns [2dc831b025a26bb5cbf1926d5702ecc62949111e3ab32afa25a24836a2d9dbef] <==
	* [INFO] 10.244.0.3:51107 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000111254s
	[INFO] 10.244.1.2:34948 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142696s
	[INFO] 10.244.1.2:55306 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001659804s
	[INFO] 10.244.1.2:36474 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000145133s
	[INFO] 10.244.1.2:46465 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000097289s
	[INFO] 10.244.1.2:32890 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001346416s
	[INFO] 10.244.1.2:59972 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000128288s
	[INFO] 10.244.1.2:37053 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000121404s
	[INFO] 10.244.1.2:46209 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00008265s
	[INFO] 10.244.0.3:55539 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000103524s
	[INFO] 10.244.0.3:46367 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000063311s
	[INFO] 10.244.0.3:46250 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000057674s
	[INFO] 10.244.0.3:41386 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000056608s
	[INFO] 10.244.1.2:48394 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000103319s
	[INFO] 10.244.1.2:52018 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000112813s
	[INFO] 10.244.1.2:38218 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000078458s
	[INFO] 10.244.1.2:37787 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000072886s
	[INFO] 10.244.0.3:43032 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000112894s
	[INFO] 10.244.0.3:39499 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000140751s
	[INFO] 10.244.0.3:49016 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00010926s
	[INFO] 10.244.0.3:56048 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000140087s
	[INFO] 10.244.1.2:58057 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000177518s
	[INFO] 10.244.1.2:33008 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000084472s
	[INFO] 10.244.1.2:51267 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000084185s
	[INFO] 10.244.1.2:57253 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000076636s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-717678
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=multinode-717678
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90
	                    minikube.k8s.io/name=multinode-717678
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_09T23_19_12_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 09 Oct 2023 23:19:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-717678
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 09 Oct 2023 23:20:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 09 Oct 2023 23:19:57 +0000   Mon, 09 Oct 2023 23:19:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 09 Oct 2023 23:19:57 +0000   Mon, 09 Oct 2023 23:19:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 09 Oct 2023 23:19:57 +0000   Mon, 09 Oct 2023 23:19:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 09 Oct 2023 23:19:57 +0000   Mon, 09 Oct 2023 23:19:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-717678
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 5bcd36adb14549ca940627a205cc76d9
	  System UUID:                5d1695ff-699a-4d89-a7c9-cf13f3216c05
	  Boot ID:                    049a78d9-9f92-4a07-bf20-80a1aba53693
	  Kernel Version:             5.15.0-1047-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-2rmqx                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-5dd5756b68-zz9n9                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     63s
	  kube-system                 etcd-multinode-717678                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         77s
	  kube-system                 kindnet-mr6j6                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      63s
	  kube-system                 kube-apiserver-multinode-717678             250m (12%)    0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 kube-controller-manager-multinode-717678    200m (10%)    0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 kube-proxy-8zh7z                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 kube-scheduler-multinode-717678             100m (5%)     0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 61s   kube-proxy       
	  Normal  Starting                 77s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  77s   kubelet          Node multinode-717678 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    77s   kubelet          Node multinode-717678 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     77s   kubelet          Node multinode-717678 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           64s   node-controller  Node multinode-717678 event: Registered Node multinode-717678 in Controller
	  Normal  NodeReady                31s   kubelet          Node multinode-717678 status is now: NodeReady
	
	
	Name:               multinode-717678-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=multinode-717678-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 09 Oct 2023 23:20:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-717678-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 09 Oct 2023 23:20:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 09 Oct 2023 23:20:16 +0000   Mon, 09 Oct 2023 23:20:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 09 Oct 2023 23:20:16 +0000   Mon, 09 Oct 2023 23:20:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 09 Oct 2023 23:20:16 +0000   Mon, 09 Oct 2023 23:20:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 09 Oct 2023 23:20:16 +0000   Mon, 09 Oct 2023 23:20:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-717678-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 2f8182c881a24a4faf4375331d88f62a
	  System UUID:                58d9cfa6-5867-446f-ba36-d83613003ee9
	  Boot ID:                    049a78d9-9f92-4a07-bf20-80a1aba53693
	  Kernel Version:             5.15.0-1047-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-5q5k2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kindnet-hst6q               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      14s
	  kube-system                 kube-proxy-vrv88            0 (0%)        0 (0%)      0 (0%)           0 (0%)         14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12s                kube-proxy       
	  Normal  RegisteredNode           14s                node-controller  Node multinode-717678-m02 event: Registered Node multinode-717678-m02 in Controller
	  Normal  NodeHasSufficientMemory  14s (x5 over 16s)  kubelet          Node multinode-717678-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14s (x5 over 16s)  kubelet          Node multinode-717678-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14s (x5 over 16s)  kubelet          Node multinode-717678-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                12s                kubelet          Node multinode-717678-m02 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.001188] FS-Cache: O-key=[8] 'ed75ed0000000000'
	[  +0.000841] FS-Cache: N-cookie c=00000066 [p=0000005d fl=2 nc=0 na=1]
	[  +0.000972] FS-Cache: N-cookie d=00000000cbdaf303{9p.inode} n=00000000f000a426
	[  +0.001097] FS-Cache: N-key=[8] 'ed75ed0000000000'
	[  +0.002696] FS-Cache: Duplicate cookie detected
	[  +0.000758] FS-Cache: O-cookie c=00000060 [p=0000005d fl=226 nc=0 na=1]
	[  +0.000976] FS-Cache: O-cookie d=00000000cbdaf303{9p.inode} n=00000000e2aea24f
	[  +0.001120] FS-Cache: O-key=[8] 'ed75ed0000000000'
	[  +0.000757] FS-Cache: N-cookie c=00000067 [p=0000005d fl=2 nc=0 na=1]
	[  +0.001016] FS-Cache: N-cookie d=00000000cbdaf303{9p.inode} n=000000000a2d4cb7
	[  +0.001098] FS-Cache: N-key=[8] 'ed75ed0000000000'
	[  +2.824684] FS-Cache: Duplicate cookie detected
	[  +0.000733] FS-Cache: O-cookie c=0000005e [p=0000005d fl=226 nc=0 na=1]
	[  +0.001053] FS-Cache: O-cookie d=00000000cbdaf303{9p.inode} n=0000000017dc1af4
	[  +0.001081] FS-Cache: O-key=[8] 'ec75ed0000000000'
	[  +0.000718] FS-Cache: N-cookie c=00000069 [p=0000005d fl=2 nc=0 na=1]
	[  +0.000968] FS-Cache: N-cookie d=00000000cbdaf303{9p.inode} n=00000000f000a426
	[  +0.001070] FS-Cache: N-key=[8] 'ec75ed0000000000'
	[  +0.325722] FS-Cache: Duplicate cookie detected
	[  +0.000761] FS-Cache: O-cookie c=00000063 [p=0000005d fl=226 nc=0 na=1]
	[  +0.000956] FS-Cache: O-cookie d=00000000cbdaf303{9p.inode} n=00000000142f07e9
	[  +0.001082] FS-Cache: O-key=[8] 'f375ed0000000000'
	[  +0.000768] FS-Cache: N-cookie c=0000006a [p=0000005d fl=2 nc=0 na=1]
	[  +0.000938] FS-Cache: N-cookie d=00000000cbdaf303{9p.inode} n=000000009f4943e1
	[  +0.001027] FS-Cache: N-key=[8] 'f375ed0000000000'
	
	* 
	* ==> etcd [61bbd98243dee4647bd7163e2308e0b1f114e86d52cf7e9ac161baa11f39cf70] <==
	* {"level":"info","ts":"2023-10-09T23:19:04.357581Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2023-10-09T23:19:04.357725Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2023-10-09T23:19:04.359412Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-10-09T23:19:04.359597Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-10-09T23:19:04.359724Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-10-09T23:19:04.36014Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-10-09T23:19:04.360379Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-10-09T23:19:05.343173Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2023-10-09T23:19:05.343289Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-10-09T23:19:05.343356Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2023-10-09T23:19:05.343396Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2023-10-09T23:19:05.343434Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-10-09T23:19:05.343477Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2023-10-09T23:19:05.343511Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-10-09T23:19:05.347367Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-717678 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-10-09T23:19:05.347571Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-09T23:19:05.348603Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-10-09T23:19:05.351234Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-10-09T23:19:05.360018Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2023-10-09T23:19:05.38322Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-09T23:19:05.391152Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-10-09T23:19:05.391248Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-10-09T23:19:05.391314Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-09T23:19:05.391419Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-10-09T23:19:05.391474Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	* 
	* ==> kernel <==
	*  23:20:28 up  7:02,  0 users,  load average: 1.25, 1.99, 1.84
	Linux multinode-717678 5.15.0-1047-aws #52~20.04.1-Ubuntu SMP Thu Sep 21 10:08:54 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [bf5012625cad63c180a9e3bc028cacacb2e30dcf39e62b0458819ed313d0d999] <==
	* I1009 23:19:26.716166       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1009 23:19:26.716236       1 main.go:107] hostIP = 192.168.58.2
	podIP = 192.168.58.2
	I1009 23:19:26.716359       1 main.go:116] setting mtu 1500 for CNI 
	I1009 23:19:26.716368       1 main.go:146] kindnetd IP family: "ipv4"
	I1009 23:19:26.716381       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1009 23:19:56.951475       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I1009 23:19:56.965597       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1009 23:19:56.965628       1 main.go:227] handling current node
	I1009 23:20:06.975794       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1009 23:20:06.975825       1 main.go:227] handling current node
	I1009 23:20:16.988573       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1009 23:20:16.988604       1 main.go:227] handling current node
	I1009 23:20:16.988616       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1009 23:20:16.988662       1 main.go:250] Node multinode-717678-m02 has CIDR [10.244.1.0/24] 
	I1009 23:20:16.988833       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
	I1009 23:20:26.993025       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1009 23:20:26.993145       1 main.go:227] handling current node
	I1009 23:20:26.993182       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1009 23:20:26.993225       1 main.go:250] Node multinode-717678-m02 has CIDR [10.244.1.0/24] 
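	
	Once kindnet learns that multinode-717678-m02 owns pod CIDR 10.244.1.0/24, it installs a host route through that node's IP — the "Adding route ... Dst: 10.244.1.0/24 ... Gw: 192.168.58.3" line above. A minimal sketch of that kind of route programming, assuming the github.com/vishvananda/netlink package (which kindnet itself builds on); the CIDR and gateway are copied from the log, and the program needs CAP_NET_ADMIN:
	
	// add_pod_cidr_route.go — illustrative only, not kindnet's actual code.
	package main
	
	import (
		"log"
		"net"
	
		"github.com/vishvananda/netlink"
	)
	
	func main() {
		// Destination: the remote node's pod CIDR, from the log above.
		_, dst, err := net.ParseCIDR("10.244.1.0/24")
		if err != nil {
			log.Fatal(err)
		}
		route := &netlink.Route{
			Dst: dst,
			Gw:  net.ParseIP("192.168.58.3"), // the remote node's IP
		}
		// A real controller would use RouteReplace or tolerate EEXIST;
		// RouteAdd fails if the route is already present.
		if err := netlink.RouteAdd(route); err != nil {
			log.Fatal(err)
		}
	}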
	
	* 
	* ==> kube-apiserver [add32a9afc223ef304a25ca6002379e287af1483a1ce3fd46f27f867a41a7735] <==
	* I1009 23:19:08.284261       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1009 23:19:08.284266       1 cache.go:39] Caches are synced for autoregister controller
	I1009 23:19:08.295256       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1009 23:19:08.310221       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1009 23:19:08.320631       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1009 23:19:09.084991       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1009 23:19:09.090973       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1009 23:19:09.091001       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1009 23:19:09.692331       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1009 23:19:09.734773       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1009 23:19:09.799549       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1009 23:19:09.806491       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I1009 23:19:09.807697       1 controller.go:624] quota admission added evaluator for: endpoints
	I1009 23:19:09.817966       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1009 23:19:10.199940       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1009 23:19:11.143899       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1009 23:19:11.158414       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1009 23:19:11.171758       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1009 23:19:24.936020       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1009 23:19:25.363570       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	E1009 23:20:24.238733       1 upgradeaware.go:439] Error proxying data from backend to client: write tcp 192.168.58.2:8443->192.168.58.1:59790: write: broken pipe
	E1009 23:20:25.156656       1 upgradeaware.go:425] Error proxying data from client to backend: write tcp 192.168.58.2:59662->192.168.58.3:10250: write: broken pipe
	E1009 23:20:25.419342       1 upgradeaware.go:425] Error proxying data from client to backend: write tcp 192.168.58.2:44790->192.168.58.2:10250: write: broken pipe
	E1009 23:20:25.699407       1 upgradeaware.go:425] Error proxying data from client to backend: write tcp 192.168.58.2:59664->192.168.58.3:10250: write: broken pipe
	E1009 23:20:26.407716       1 upgradeaware.go:425] Error proxying data from client to backend: write tcp 192.168.58.2:44794->192.168.58.2:10250: write: broken pipe
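	
	The trailing errors come from the apiserver's streaming proxy (kubectl exec/attach/logs traffic relayed to the kubelets on port 10250): one side of a TCP connection went away while the other was still writing, which is what happens whenever a streaming client disconnects mid-transfer — for example during the log collection that produced this dump. A self-contained Go illustration of the failure mode (plain net sockets, no Kubernetes code):
	
	// broken_pipe.go — one side keeps writing after the peer has closed.
	package main
	
	import (
		"fmt"
		"log"
		"net"
		"time"
	)
	
	func main() {
		ln, err := net.Listen("tcp", "127.0.0.1:0")
		if err != nil {
			log.Fatal(err)
		}
		go func() {
			conn, err := net.Dial("tcp", ln.Addr().String())
			if err != nil {
				log.Fatal(err)
			}
			conn.Close() // the "client" disconnects immediately
		}()
		srv, err := ln.Accept()
		if err != nil {
			log.Fatal(err)
		}
		time.Sleep(100 * time.Millisecond) // let the FIN/RST arrive
		for i := 0; i < 10; i++ { // keep streaming into the dead connection
			if _, err := srv.Write(make([]byte, 64<<10)); err != nil {
				fmt.Println("write error:", err) // typically "connection reset" or "broken pipe"
				return
			}
		}
	}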
	
	* 
	* ==> kube-controller-manager [631be2dbe50c820ad9afcf163acaab615ebbe39117c3f70f6b39b1e89e5b85c0] <==
	* I1009 23:19:25.901894       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="46.703µs"
	I1009 23:19:57.354720       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="108.373µs"
	I1009 23:19:57.378665       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="75.717µs"
	I1009 23:19:58.525107       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="11.403006ms"
	I1009 23:19:58.526992       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="997.189µs"
	I1009 23:19:59.390330       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1009 23:20:14.278451       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-717678-m02\" does not exist"
	I1009 23:20:14.295268       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-717678-m02" podCIDRs=["10.244.1.0/24"]
	I1009 23:20:14.300214       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-hst6q"
	I1009 23:20:14.300319       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-vrv88"
	I1009 23:20:14.391816       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-717678-m02"
	I1009 23:20:14.392005       1 event.go:307] "Event occurred" object="multinode-717678-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-717678-m02 event: Registered Node multinode-717678-m02 in Controller"
	I1009 23:20:16.952342       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-717678-m02"
	I1009 23:20:19.340570       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I1009 23:20:19.358878       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-5q5k2"
	I1009 23:20:19.375667       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-2rmqx"
	I1009 23:20:19.395854       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="55.021753ms"
	I1009 23:20:19.424914       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="28.933835ms"
	I1009 23:20:19.432386       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-5q5k2" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-5q5k2"
	I1009 23:20:19.449856       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="24.811377ms"
	I1009 23:20:19.450068       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="78.08µs"
	I1009 23:20:21.885242       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="6.507418ms"
	I1009 23:20:21.885312       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="37.243µs"
	I1009 23:20:23.569675       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="6.061935ms"
	I1009 23:20:23.569860       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="48.689µs"
	
	* 
	* ==> kube-proxy [dd6a59d335ceb3534e2db6fd5cb3c1f15054ed0655d022d7b9042753d56fc227] <==
	* I1009 23:19:26.930886       1 server_others.go:69] "Using iptables proxy"
	I1009 23:19:26.946627       1 node.go:141] Successfully retrieved node IP: 192.168.58.2
	I1009 23:19:26.969776       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1009 23:19:26.972219       1 server_others.go:152] "Using iptables Proxier"
	I1009 23:19:26.972268       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1009 23:19:26.972277       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1009 23:19:26.972368       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1009 23:19:26.972668       1 server.go:846] "Version info" version="v1.28.2"
	I1009 23:19:26.972686       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 23:19:26.974082       1 config.go:188] "Starting service config controller"
	I1009 23:19:26.974102       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1009 23:19:26.974119       1 config.go:97] "Starting endpoint slice config controller"
	I1009 23:19:26.974123       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1009 23:19:26.974581       1 config.go:315] "Starting node config controller"
	I1009 23:19:26.974596       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1009 23:19:27.074923       1 shared_informer.go:318] Caches are synced for node config
	I1009 23:19:27.074962       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1009 23:19:27.075030       1 shared_informer.go:318] Caches are synced for service config
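	
	Note the proxier setting route_localnet=1 so NodePorts also answer on 127.0.0.1. Under the hood this is an ordinary sysctl write; a standalone sketch (hypothetical helper, needs root — kube-proxy does the equivalent through its own sysctl utilities):
	
	// route_localnet.go — writes the sysctl the kube-proxy log reports setting.
	package main
	
	import (
		"log"
		"os"
	)
	
	func main() {
		// procfs path for net.ipv4.conf.all.route_localnet
		const path = "/proc/sys/net/ipv4/conf/all/route_localnet"
		if err := os.WriteFile(path, []byte("1"), 0o644); err != nil {
			log.Fatal(err)
		}
	}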
	
	* 
	* ==> kube-scheduler [154d5780dfb94cdfe89feb0e33e15860954d176d23f43a149de251b78b493a55] <==
	* W1009 23:19:08.260131       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1009 23:19:08.260178       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1009 23:19:08.260143       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1009 23:19:08.260008       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1009 23:19:08.259945       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1009 23:19:08.260297       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1009 23:19:08.260359       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1009 23:19:08.260382       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1009 23:19:08.260456       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1009 23:19:08.260308       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1009 23:19:08.260208       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1009 23:19:08.260580       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1009 23:19:08.260420       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1009 23:19:08.260661       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1009 23:19:08.259666       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1009 23:19:08.260743       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1009 23:19:08.259611       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1009 23:19:08.260820       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1009 23:19:09.168740       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1009 23:19:09.168883       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1009 23:19:09.177621       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1009 23:19:09.177755       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1009 23:19:09.465776       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1009 23:19:09.465989       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I1009 23:19:11.651487       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
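	
	The forbidden warnings are the usual scheduler startup race: its informers begin listing before the apiserver has finished wiring up RBAC, and they stop once the "Caches are synced" line appears. If such errors persisted, the individual verdicts could be re-checked by impersonating the scheduler; a small sketch assuming kubectl is on PATH and the kubeconfig points at this cluster:
	
	// can_i.go — asks the apiserver whether system:kube-scheduler may list pods.
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func main() {
		out, err := exec.Command("kubectl", "auth", "can-i",
			"list", "pods", "--as", "system:kube-scheduler").CombinedOutput()
		fmt.Printf("%s(err=%v)\n", out, err)
	}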
	
	* 
	* ==> kubelet <==
	* Oct 09 23:19:25 multinode-717678 kubelet[1394]: I1009 23:19:25.620340    1394 topology_manager.go:215] "Topology Admit Handler" podUID="6f90c4c5-a8d7-4d81-85be-abc93edf1b46" podNamespace="kube-system" podName="kindnet-mr6j6"
	Oct 09 23:19:25 multinode-717678 kubelet[1394]: I1009 23:19:25.686852    1394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6f90c4c5-a8d7-4d81-85be-abc93edf1b46-xtables-lock\") pod \"kindnet-mr6j6\" (UID: \"6f90c4c5-a8d7-4d81-85be-abc93edf1b46\") " pod="kube-system/kindnet-mr6j6"
	Oct 09 23:19:25 multinode-717678 kubelet[1394]: I1009 23:19:25.686909    1394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6f90c4c5-a8d7-4d81-85be-abc93edf1b46-lib-modules\") pod \"kindnet-mr6j6\" (UID: \"6f90c4c5-a8d7-4d81-85be-abc93edf1b46\") " pod="kube-system/kindnet-mr6j6"
	Oct 09 23:19:25 multinode-717678 kubelet[1394]: I1009 23:19:25.686933    1394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8n88\" (UniqueName: \"kubernetes.io/projected/6f90c4c5-a8d7-4d81-85be-abc93edf1b46-kube-api-access-p8n88\") pod \"kindnet-mr6j6\" (UID: \"6f90c4c5-a8d7-4d81-85be-abc93edf1b46\") " pod="kube-system/kindnet-mr6j6"
	Oct 09 23:19:25 multinode-717678 kubelet[1394]: I1009 23:19:25.686956    1394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/6f90c4c5-a8d7-4d81-85be-abc93edf1b46-cni-cfg\") pod \"kindnet-mr6j6\" (UID: \"6f90c4c5-a8d7-4d81-85be-abc93edf1b46\") " pod="kube-system/kindnet-mr6j6"
	Oct 09 23:19:26 multinode-717678 kubelet[1394]: W1009 23:19:26.556033    1394 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/4263e5d8fe6b4225f635cb6100a7248d26a60b28f7521d97b02e4d683d7c37c9/crio-aef48b5286cf14fb5b18960e2631aca23e6950597dbb362d8097d69c68f2a3f1 WatchSource:0}: Error finding container aef48b5286cf14fb5b18960e2631aca23e6950597dbb362d8097d69c68f2a3f1: Status 404 returned error can't find the container with id aef48b5286cf14fb5b18960e2631aca23e6950597dbb362d8097d69c68f2a3f1
	Oct 09 23:19:26 multinode-717678 kubelet[1394]: W1009 23:19:26.779925    1394 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/4263e5d8fe6b4225f635cb6100a7248d26a60b28f7521d97b02e4d683d7c37c9/crio-31feffed80fd64e9f7dfe5de5cc53c4fd7740adb1851eeb8f35259291f5ca094 WatchSource:0}: Error finding container 31feffed80fd64e9f7dfe5de5cc53c4fd7740adb1851eeb8f35259291f5ca094: Status 404 returned error can't find the container with id 31feffed80fd64e9f7dfe5de5cc53c4fd7740adb1851eeb8f35259291f5ca094
	Oct 09 23:19:27 multinode-717678 kubelet[1394]: I1009 23:19:27.457021    1394 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-8zh7z" podStartSLOduration=2.456977998 podCreationTimestamp="2023-10-09 23:19:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-09 23:19:27.444684962 +0000 UTC m=+16.336435770" watchObservedRunningTime="2023-10-09 23:19:27.456977998 +0000 UTC m=+16.348728806"
	Oct 09 23:19:27 multinode-717678 kubelet[1394]: I1009 23:19:27.457128    1394 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-mr6j6" podStartSLOduration=2.457109814 podCreationTimestamp="2023-10-09 23:19:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-09 23:19:27.456419523 +0000 UTC m=+16.348170322" watchObservedRunningTime="2023-10-09 23:19:27.457109814 +0000 UTC m=+16.348860638"
	Oct 09 23:19:57 multinode-717678 kubelet[1394]: I1009 23:19:57.317099    1394 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Oct 09 23:19:57 multinode-717678 kubelet[1394]: I1009 23:19:57.348385    1394 topology_manager.go:215] "Topology Admit Handler" podUID="832d43a3-110f-47e7-a82a-e4fbfe107d43" podNamespace="kube-system" podName="storage-provisioner"
	Oct 09 23:19:57 multinode-717678 kubelet[1394]: I1009 23:19:57.354058    1394 topology_manager.go:215] "Topology Admit Handler" podUID="319f2e3b-8eb5-4d49-bfa6-f7add29b87fd" podNamespace="kube-system" podName="coredns-5dd5756b68-zz9n9"
	Oct 09 23:19:57 multinode-717678 kubelet[1394]: I1009 23:19:57.439412    1394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q225t\" (UniqueName: \"kubernetes.io/projected/832d43a3-110f-47e7-a82a-e4fbfe107d43-kube-api-access-q225t\") pod \"storage-provisioner\" (UID: \"832d43a3-110f-47e7-a82a-e4fbfe107d43\") " pod="kube-system/storage-provisioner"
	Oct 09 23:19:57 multinode-717678 kubelet[1394]: I1009 23:19:57.439472    1394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/832d43a3-110f-47e7-a82a-e4fbfe107d43-tmp\") pod \"storage-provisioner\" (UID: \"832d43a3-110f-47e7-a82a-e4fbfe107d43\") " pod="kube-system/storage-provisioner"
	Oct 09 23:19:57 multinode-717678 kubelet[1394]: I1009 23:19:57.439498    1394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rwt8\" (UniqueName: \"kubernetes.io/projected/319f2e3b-8eb5-4d49-bfa6-f7add29b87fd-kube-api-access-5rwt8\") pod \"coredns-5dd5756b68-zz9n9\" (UID: \"319f2e3b-8eb5-4d49-bfa6-f7add29b87fd\") " pod="kube-system/coredns-5dd5756b68-zz9n9"
	Oct 09 23:19:57 multinode-717678 kubelet[1394]: I1009 23:19:57.439526    1394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/319f2e3b-8eb5-4d49-bfa6-f7add29b87fd-config-volume\") pod \"coredns-5dd5756b68-zz9n9\" (UID: \"319f2e3b-8eb5-4d49-bfa6-f7add29b87fd\") " pod="kube-system/coredns-5dd5756b68-zz9n9"
	Oct 09 23:19:57 multinode-717678 kubelet[1394]: W1009 23:19:57.716212    1394 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/4263e5d8fe6b4225f635cb6100a7248d26a60b28f7521d97b02e4d683d7c37c9/crio-eb1f2b4a2a4f8005bfd6f4374082b4821905860f9fff4e0258c5b046f08fe24c WatchSource:0}: Error finding container eb1f2b4a2a4f8005bfd6f4374082b4821905860f9fff4e0258c5b046f08fe24c: Status 404 returned error can't find the container with id eb1f2b4a2a4f8005bfd6f4374082b4821905860f9fff4e0258c5b046f08fe24c
	Oct 09 23:19:58 multinode-717678 kubelet[1394]: I1009 23:19:58.513686    1394 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=33.513641429 podCreationTimestamp="2023-10-09 23:19:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-09 23:19:58.501191633 +0000 UTC m=+47.392942433" watchObservedRunningTime="2023-10-09 23:19:58.513641429 +0000 UTC m=+47.405392237"
	Oct 09 23:20:19 multinode-717678 kubelet[1394]: I1009 23:20:19.394240    1394 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-zz9n9" podStartSLOduration=54.394173779 podCreationTimestamp="2023-10-09 23:19:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-09 23:19:58.514892075 +0000 UTC m=+47.406642875" watchObservedRunningTime="2023-10-09 23:20:19.394173779 +0000 UTC m=+68.285924587"
	Oct 09 23:20:19 multinode-717678 kubelet[1394]: I1009 23:20:19.394647    1394 topology_manager.go:215] "Topology Admit Handler" podUID="bd5d5264-b136-4526-9e03-070fcd80f6d6" podNamespace="default" podName="busybox-5bc68d56bd-2rmqx"
	Oct 09 23:20:19 multinode-717678 kubelet[1394]: W1009 23:20:19.399594    1394 reflector.go:535] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:multinode-717678" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'multinode-717678' and this object
	Oct 09 23:20:19 multinode-717678 kubelet[1394]: E1009 23:20:19.399632    1394 reflector.go:147] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:multinode-717678" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'multinode-717678' and this object
	Oct 09 23:20:19 multinode-717678 kubelet[1394]: I1009 23:20:19.501412    1394 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drr6g\" (UniqueName: \"kubernetes.io/projected/bd5d5264-b136-4526-9e03-070fcd80f6d6-kube-api-access-drr6g\") pod \"busybox-5bc68d56bd-2rmqx\" (UID: \"bd5d5264-b136-4526-9e03-070fcd80f6d6\") " pod="default/busybox-5bc68d56bd-2rmqx"
	Oct 09 23:20:20 multinode-717678 kubelet[1394]: W1009 23:20:20.646639    1394 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/4263e5d8fe6b4225f635cb6100a7248d26a60b28f7521d97b02e4d683d7c37c9/crio-560bd1921687e2adeb17d33ccfb9dd3d4aab6b3ece63e96140fe1efb5006a19b WatchSource:0}: Error finding container 560bd1921687e2adeb17d33ccfb9dd3d4aab6b3ece63e96140fe1efb5006a19b: Status 404 returned error can't find the container with id 560bd1921687e2adeb17d33ccfb9dd3d4aab6b3ece63e96140fe1efb5006a19b
	Oct 09 23:20:26 multinode-717678 kubelet[1394]: E1009 23:20:26.404913    1394 upgradeaware.go:425] Error proxying data from client to backend: readfrom tcp 127.0.0.1:46318->127.0.0.1:38605: write tcp 127.0.0.1:46318->127.0.0.1:38605: write: broken pipe
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p multinode-717678 -n multinode-717678
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-717678 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (4.17s)

TestRunningBinaryUpgrade (78.79s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.17.0.1418224784.exe start -p running-upgrade-022173 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.17.0.1418224784.exe start -p running-upgrade-022173 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m9.609616143s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-022173 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:143: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p running-upgrade-022173 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (3.945193894s)

-- stdout --
	* [running-upgrade-022173] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17375
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17375-1537865/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17375-1537865/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.2
	* Using the docker driver based on existing profile
	* Starting control plane node running-upgrade-022173 in cluster running-upgrade-022173
	* Pulling base image ...
	* Updating the running docker "running-upgrade-022173" container ...
	
	

-- /stdout --
** stderr ** 
	I1009 23:40:27.178305 1671851 out.go:296] Setting OutFile to fd 1 ...
	I1009 23:40:27.178576 1671851 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1009 23:40:27.178604 1671851 out.go:309] Setting ErrFile to fd 2...
	I1009 23:40:27.178624 1671851 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1009 23:40:27.178913 1671851 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17375-1537865/.minikube/bin
	I1009 23:40:27.179351 1671851 out.go:303] Setting JSON to false
	I1009 23:40:27.180462 1671851 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":26571,"bootTime":1696868257,"procs":232,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1009 23:40:27.180563 1671851 start.go:138] virtualization:  
	I1009 23:40:27.183681 1671851 out.go:177] * [running-upgrade-022173] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1009 23:40:27.186407 1671851 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17375-1537865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4
	I1009 23:40:27.191255 1671851 notify.go:220] Checking for updates...
	I1009 23:40:27.194533 1671851 out.go:177]   - MINIKUBE_LOCATION=17375
	I1009 23:40:27.196420 1671851 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 23:40:27.198405 1671851 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17375-1537865/kubeconfig
	I1009 23:40:27.200407 1671851 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17375-1537865/.minikube
	I1009 23:40:27.202560 1671851 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1009 23:40:27.204948 1671851 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 23:40:27.207756 1671851 config.go:182] Loaded profile config "running-upgrade-022173": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1009 23:40:27.210323 1671851 out.go:177] * Kubernetes 1.28.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.2
	I1009 23:40:27.212199 1671851 driver.go:378] Setting default libvirt URI to qemu:///system
	I1009 23:40:27.259152 1671851 docker.go:122] docker version: linux-24.0.6:Docker Engine - Community
	I1009 23:40:27.259247 1671851 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 23:40:27.350759 1671851 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17375-1537865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4.checksum
	I1009 23:40:27.375745 1671851 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:44 SystemTime:2023-10-09 23:40:27.364591708 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1009 23:40:27.375853 1671851 docker.go:295] overlay module found
	I1009 23:40:27.379532 1671851 out.go:177] * Using the docker driver based on existing profile
	I1009 23:40:27.381532 1671851 start.go:298] selected driver: docker
	I1009 23:40:27.381550 1671851 start.go:902] validating driver "docker" against &{Name:running-upgrade-022173 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:running-upgrade-022173 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.70.158 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1009 23:40:27.381658 1671851 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 23:40:27.382316 1671851 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 23:40:27.459719 1671851 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:44 SystemTime:2023-10-09 23:40:27.444798796 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1009 23:40:27.460166 1671851 cni.go:84] Creating CNI manager for ""
	I1009 23:40:27.460189 1671851 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 23:40:27.460204 1671851 start_flags.go:323] config:
	{Name:running-upgrade-022173 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:running-upgrade-022173 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.70.158 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1009 23:40:27.462824 1671851 out.go:177] * Starting control plane node running-upgrade-022173 in cluster running-upgrade-022173
	I1009 23:40:27.465161 1671851 cache.go:122] Beginning downloading kic base image for docker with crio
	I1009 23:40:27.467608 1671851 out.go:177] * Pulling base image ...
	I1009 23:40:27.470006 1671851 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I1009 23:40:27.470196 1671851 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I1009 23:40:27.499630 1671851 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon, skipping pull
	I1009 23:40:27.499654 1671851 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e exists in daemon, skipping load
	W1009 23:40:27.610658 1671851 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1009 23:40:27.610818 1671851 profile.go:148] Saving config to /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/running-upgrade-022173/config.json ...
	I1009 23:40:27.611074 1671851 cache.go:195] Successfully downloaded all kic artifacts
	I1009 23:40:27.611106 1671851 start.go:365] acquiring machines lock for running-upgrade-022173: {Name:mk5d737b35980a8837399d9770dafb90e0373bce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 23:40:27.611191 1671851 start.go:369] acquired machines lock for "running-upgrade-022173" in 33.354µs
	I1009 23:40:27.611209 1671851 start.go:96] Skipping create...Using existing machine configuration
	I1009 23:40:27.611216 1671851 fix.go:54] fixHost starting: 
	I1009 23:40:27.611482 1671851 cli_runner.go:164] Run: docker container inspect running-upgrade-022173 --format={{.State.Status}}
	I1009 23:40:27.611734 1671851 cache.go:107] acquiring lock: {Name:mk547194f29dea59c964eb78b3cd8e0df5ea5528 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 23:40:27.611801 1671851 cache.go:115] /home/jenkins/minikube-integration/17375-1537865/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1009 23:40:27.611814 1671851 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17375-1537865/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 86.335µs
	I1009 23:40:27.612116 1671851 cache.go:107] acquiring lock: {Name:mke13f05d30ce0fb18987deeb13e5c57f2f12df8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 23:40:27.612185 1671851 cache.go:115] /home/jenkins/minikube-integration/17375-1537865/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I1009 23:40:27.612197 1671851 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/17375-1537865/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 85.99µs
	I1009 23:40:27.612423 1671851 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17375-1537865/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1009 23:40:27.612466 1671851 cache.go:107] acquiring lock: {Name:mkb84087b9344d3ebb38a0b9c0a1052ff06b25bc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 23:40:27.612519 1671851 cache.go:115] /home/jenkins/minikube-integration/17375-1537865/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I1009 23:40:27.612526 1671851 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17375-1537865/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 63.253µs
	I1009 23:40:27.612534 1671851 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17375-1537865/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	I1009 23:40:27.612544 1671851 cache.go:107] acquiring lock: {Name:mk7f72d4bbe8fdd9109bcd737dfac688966b3c64 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 23:40:27.612582 1671851 cache.go:115] /home/jenkins/minikube-integration/17375-1537865/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I1009 23:40:27.612591 1671851 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/17375-1537865/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 48.739µs
	I1009 23:40:27.612599 1671851 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/17375-1537865/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I1009 23:40:27.612611 1671851 cache.go:107] acquiring lock: {Name:mkc50cf68a41acb8ed077d688832a34dd3175437 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 23:40:27.612640 1671851 cache.go:115] /home/jenkins/minikube-integration/17375-1537865/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I1009 23:40:27.612649 1671851 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/17375-1537865/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 41.584µs
	I1009 23:40:27.612655 1671851 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/17375-1537865/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	I1009 23:40:27.612666 1671851 cache.go:107] acquiring lock: {Name:mk8415099262f4fae61bef3a512e73836de3726f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 23:40:27.612696 1671851 cache.go:115] /home/jenkins/minikube-integration/17375-1537865/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I1009 23:40:27.612704 1671851 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/17375-1537865/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 38.999µs
	I1009 23:40:27.612711 1671851 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/17375-1537865/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	I1009 23:40:27.612720 1671851 cache.go:107] acquiring lock: {Name:mkdc4aa1196ced396ac0e67ae116ccdb9bcc1929 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 23:40:27.612754 1671851 cache.go:115] /home/jenkins/minikube-integration/17375-1537865/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I1009 23:40:27.612763 1671851 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/17375-1537865/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 43.864µs
	I1009 23:40:27.612770 1671851 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/17375-1537865/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	I1009 23:40:27.612784 1671851 cache.go:107] acquiring lock: {Name:mkc53020ba3f290825df8d311190ba6aa55b0d24 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 23:40:27.612815 1671851 cache.go:115] /home/jenkins/minikube-integration/17375-1537865/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I1009 23:40:27.612825 1671851 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/17375-1537865/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 41.345µs
	I1009 23:40:27.612831 1671851 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/17375-1537865/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	I1009 23:40:27.612846 1671851 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/17375-1537865/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	I1009 23:40:27.612862 1671851 cache.go:87] Successfully saved all images to host disk.
	I1009 23:40:27.632136 1671851 fix.go:102] recreateIfNeeded on running-upgrade-022173: state=Running err=<nil>
	W1009 23:40:27.632166 1671851 fix.go:128] unexpected machine state, will restart: <nil>
	I1009 23:40:27.634666 1671851 out.go:177] * Updating the running docker "running-upgrade-022173" container ...
	I1009 23:40:27.637063 1671851 machine.go:88] provisioning docker machine ...
	I1009 23:40:27.637099 1671851 ubuntu.go:169] provisioning hostname "running-upgrade-022173"
	I1009 23:40:27.637170 1671851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-022173
	I1009 23:40:27.662356 1671851 main.go:141] libmachine: Using SSH client type: native
	I1009 23:40:27.662853 1671851 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 34537 <nil> <nil>}
	I1009 23:40:27.662877 1671851 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-022173 && echo "running-upgrade-022173" | sudo tee /etc/hostname
	I1009 23:40:27.816846 1671851 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-022173
	
	I1009 23:40:27.816950 1671851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-022173
	I1009 23:40:27.837853 1671851 main.go:141] libmachine: Using SSH client type: native
	I1009 23:40:27.838271 1671851 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 34537 <nil> <nil>}
	I1009 23:40:27.838301 1671851 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-022173' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-022173/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-022173' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 23:40:27.984518 1671851 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 23:40:27.984542 1671851 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17375-1537865/.minikube CaCertPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17375-1537865/.minikube}
	I1009 23:40:27.984580 1671851 ubuntu.go:177] setting up certificates
	I1009 23:40:27.984589 1671851 provision.go:83] configureAuth start
	I1009 23:40:27.984652 1671851 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-022173
	I1009 23:40:28.013657 1671851 provision.go:138] copyHostCerts
	I1009 23:40:28.013753 1671851 exec_runner.go:144] found /home/jenkins/minikube-integration/17375-1537865/.minikube/ca.pem, removing ...
	I1009 23:40:28.013762 1671851 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17375-1537865/.minikube/ca.pem
	I1009 23:40:28.013849 1671851 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17375-1537865/.minikube/ca.pem (1078 bytes)
	I1009 23:40:28.013963 1671851 exec_runner.go:144] found /home/jenkins/minikube-integration/17375-1537865/.minikube/cert.pem, removing ...
	I1009 23:40:28.013969 1671851 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17375-1537865/.minikube/cert.pem
	I1009 23:40:28.013997 1671851 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17375-1537865/.minikube/cert.pem (1123 bytes)
	I1009 23:40:28.014062 1671851 exec_runner.go:144] found /home/jenkins/minikube-integration/17375-1537865/.minikube/key.pem, removing ...
	I1009 23:40:28.014073 1671851 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17375-1537865/.minikube/key.pem
	I1009 23:40:28.014108 1671851 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17375-1537865/.minikube/key.pem (1679 bytes)
	I1009 23:40:28.014171 1671851 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17375-1537865/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17375-1537865/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17375-1537865/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-022173 san=[192.168.70.158 127.0.0.1 localhost 127.0.0.1 minikube running-upgrade-022173]
	I1009 23:40:28.691796 1671851 provision.go:172] copyRemoteCerts
	I1009 23:40:28.691916 1671851 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 23:40:28.691987 1671851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-022173
	I1009 23:40:28.710893 1671851 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34537 SSHKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/machines/running-upgrade-022173/id_rsa Username:docker}
	I1009 23:40:28.808400 1671851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 23:40:28.832133 1671851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-1537865/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1009 23:40:28.857200 1671851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-1537865/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 23:40:28.882162 1671851 provision.go:86] duration metric: configureAuth took 897.544281ms
	I1009 23:40:28.882189 1671851 ubuntu.go:193] setting minikube options for container-runtime
	I1009 23:40:28.882374 1671851 config.go:182] Loaded profile config "running-upgrade-022173": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1009 23:40:28.882484 1671851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-022173
	I1009 23:40:28.901543 1671851 main.go:141] libmachine: Using SSH client type: native
	I1009 23:40:28.901976 1671851 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 34537 <nil> <nil>}
	I1009 23:40:28.901997 1671851 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 23:40:29.493904 1671851 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 23:40:29.493940 1671851 machine.go:91] provisioned docker machine in 1.856858735s
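	Provisioning succeeded up to this point. A hedged sketch (not part of the run) of verifying the sysconfig drop-in written above, reusing the SSH endpoint this log already printed (port 34537, user docker, the profile's id_rsa):
	    ssh -i /home/jenkins/minikube-integration/17375-1537865/.minikube/machines/running-upgrade-022173/id_rsa \
	        -p 34537 docker@127.0.0.1 \
	        "cat /etc/sysconfig/crio.minikube && systemctl is-active crio"
	    # expected: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	    #           active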
	I1009 23:40:29.493951 1671851 start.go:300] post-start starting for "running-upgrade-022173" (driver="docker")
	I1009 23:40:29.493971 1671851 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 23:40:29.494051 1671851 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 23:40:29.494095 1671851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-022173
	I1009 23:40:29.516644 1671851 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34537 SSHKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/machines/running-upgrade-022173/id_rsa Username:docker}
	I1009 23:40:29.620703 1671851 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 23:40:29.624788 1671851 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1009 23:40:29.624818 1671851 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 23:40:29.624830 1671851 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1009 23:40:29.624837 1671851 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I1009 23:40:29.624848 1671851 filesync.go:126] Scanning /home/jenkins/minikube-integration/17375-1537865/.minikube/addons for local assets ...
	I1009 23:40:29.624901 1671851 filesync.go:126] Scanning /home/jenkins/minikube-integration/17375-1537865/.minikube/files for local assets ...
	I1009 23:40:29.624991 1671851 filesync.go:149] local asset: /home/jenkins/minikube-integration/17375-1537865/.minikube/files/etc/ssl/certs/15432152.pem -> 15432152.pem in /etc/ssl/certs
	I1009 23:40:29.625092 1671851 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 23:40:29.633950 1671851 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-1537865/.minikube/files/etc/ssl/certs/15432152.pem --> /etc/ssl/certs/15432152.pem (1708 bytes)
	I1009 23:40:29.658147 1671851 start.go:303] post-start completed in 164.180433ms
	I1009 23:40:29.658237 1671851 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 23:40:29.658283 1671851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-022173
	I1009 23:40:29.677230 1671851 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34537 SSHKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/machines/running-upgrade-022173/id_rsa Username:docker}
	I1009 23:40:29.773200 1671851 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 23:40:29.779201 1671851 fix.go:56] fixHost completed within 2.167976575s
	I1009 23:40:29.779225 1671851 start.go:83] releasing machines lock for "running-upgrade-022173", held for 2.168020776s
	I1009 23:40:29.779315 1671851 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-022173
	I1009 23:40:29.797856 1671851 ssh_runner.go:195] Run: cat /version.json
	I1009 23:40:29.797913 1671851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-022173
	I1009 23:40:29.798063 1671851 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 23:40:29.798132 1671851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-022173
	I1009 23:40:29.819280 1671851 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34537 SSHKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/machines/running-upgrade-022173/id_rsa Username:docker}
	I1009 23:40:29.821810 1671851 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34537 SSHKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/machines/running-upgrade-022173/id_rsa Username:docker}
	W1009 23:40:30.030796 1671851 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1009 23:40:30.030891 1671851 ssh_runner.go:195] Run: systemctl --version
	I1009 23:40:30.037975 1671851 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 23:40:30.260424 1671851 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1009 23:40:30.269588 1671851 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 23:40:30.300617 1671851 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1009 23:40:30.300761 1671851 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 23:40:30.343870 1671851 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
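	The two find commands above implement the rename-to-*.mk_disabled scheme minikube uses to park conflicting CNI configs. A standalone sketch of the same pattern with tightened quoting (the logged invocation relies on the remote shell's word splitting):
	    sudo find /etc/cni/net.d -maxdepth 1 -type f \
	      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	      -printf '%p, ' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;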
	I1009 23:40:30.343942 1671851 start.go:472] detecting cgroup driver to use...
	I1009 23:40:30.343992 1671851 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1009 23:40:30.344304 1671851 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 23:40:30.403817 1671851 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 23:40:30.419327 1671851 docker.go:198] disabling cri-docker service (if available) ...
	I1009 23:40:30.419440 1671851 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 23:40:30.437691 1671851 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 23:40:30.455687 1671851 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1009 23:40:30.474523 1671851 docker.go:208] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1009 23:40:30.474651 1671851 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 23:40:30.663233 1671851 docker.go:214] disabling docker service ...
	I1009 23:40:30.663347 1671851 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 23:40:30.678570 1671851 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 23:40:30.691453 1671851 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 23:40:30.828150 1671851 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 23:40:30.973921 1671851 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 23:40:30.989453 1671851 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 23:40:31.012212 1671851 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1009 23:40:31.012301 1671851 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 23:40:31.030528 1671851 out.go:177] 
	W1009 23:40:31.033116 1671851 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1009 23:40:31.033225 1671851 out.go:239] * 
	W1009 23:40:31.034409 1671851 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 23:40:31.037593 1671851 out.go:177] 
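	The RUNTIME_ENABLE failure above is the root cause of this test's exit status 90: the v1.17-era kicbase image ships its cri-o configuration as a single /etc/crio/crio.conf and has no crio.conf.d drop-in directory, so the sed against /etc/crio/crio.conf.d/02-crio.conf finds nothing to edit. A hedged sketch of a fallback tolerating both layouts (the conditional is an illustration, not what minikube runs):
	    conf=/etc/crio/crio.conf.d/02-crio.conf
	    [ -f "$conf" ] || conf=/etc/crio/crio.conf   # older kicbase images lack the drop-in dir
	    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$conf"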

                                                
                                                
** /stderr **
version_upgrade_test.go:145: upgrade from v1.17.0 to HEAD failed: out/minikube-linux-arm64 start -p running-upgrade-022173 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2023-10-09 23:40:31.073343633 +0000 UTC m=+2750.789721228
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-022173
helpers_test.go:235: (dbg) docker inspect running-upgrade-022173:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3601e5b099b12c47e81613c39640051dd04e108fd2c89099545f0688540cb797",
	        "Created": "2023-10-09T23:39:41.483895086Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1669310,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-09T23:39:41.966117047Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9b79b8263a5873a7b57b8bb7698df1f71e90108b3174dea92dc6c576c0a9dbf9",
	        "ResolvConfPath": "/var/lib/docker/containers/3601e5b099b12c47e81613c39640051dd04e108fd2c89099545f0688540cb797/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3601e5b099b12c47e81613c39640051dd04e108fd2c89099545f0688540cb797/hostname",
	        "HostsPath": "/var/lib/docker/containers/3601e5b099b12c47e81613c39640051dd04e108fd2c89099545f0688540cb797/hosts",
	        "LogPath": "/var/lib/docker/containers/3601e5b099b12c47e81613c39640051dd04e108fd2c89099545f0688540cb797/3601e5b099b12c47e81613c39640051dd04e108fd2c89099545f0688540cb797-json.log",
	        "Name": "/running-upgrade-022173",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-022173:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "running-upgrade-022173",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/bc638934d593c7ce1dfde602881d6e02ae6d3ed9ef18613127b869a04f6cceb3-init/diff:/var/lib/docker/overlay2/c0cacb43013f357d5b48728266f9ec9a8485b3a8123f16ae6f8d14de6a9db49f/diff:/var/lib/docker/overlay2/0d91dc2eddd63cca6b7fcdbc350fb98d3b84506add0e1af6ec319ab41dd823a6/diff:/var/lib/docker/overlay2/2d1dc1da8e04c2471aa90c226b937fdfe0a52e87ff07e6e140d7f3ec5f8ca4d0/diff:/var/lib/docker/overlay2/7ad540f91c8d2b245de3388b369fee1641472d35bdfe40291d89d086d7347fdc/diff:/var/lib/docker/overlay2/cf59d9001ecc749f8c48d06416bc4e97a4ccc8b54524dc7fe70e091f589903df/diff:/var/lib/docker/overlay2/f7bab84a54021a495d9f2152a253a0db279f4a06c46e77a10931d73038de784d/diff:/var/lib/docker/overlay2/243cf64ddf795af8996c9e15020e4e85712a06f0d17a3ca653a0ee78a6ce4b95/diff:/var/lib/docker/overlay2/ae6eb3fbf521e37a7dfe7a4991a99fca22f8cb60347d686d3cefb0adc615ec33/diff:/var/lib/docker/overlay2/155e94e0b646a05b74df4f8efdb8730cb6896861cb177281ced501ceaf4899bd/diff:/var/lib/docker/overlay2/28bb58
0f2644f8dc9fdf9840d2d9785057b9d37f3cd7bf5c8748ebbfc8110aec/diff:/var/lib/docker/overlay2/3bf5bc2b75665462e77b9fdfcac531511648437ce07488324b0911c6d4de184b/diff:/var/lib/docker/overlay2/550c9c9125a6b92baed811670dc6aa4c283bce1d609893cff6d8b8dbb7672279/diff:/var/lib/docker/overlay2/5d28aaca6a283efa8be6df756dbdac6fe95fe2a99bc203378cfd4caa791ff69c/diff:/var/lib/docker/overlay2/c6ecb394b3183f092f22662d139c49d7e5aea5351afaf18ffdcff16633bb3aa0/diff:/var/lib/docker/overlay2/65d6e447fe9736386e573e5b4a67d5a1d259d2f0b2f0ec2d2a06cef35b974160/diff:/var/lib/docker/overlay2/90cfaaacc658f969dd661cecfe0813973608044c302c6dd777804748b2c4dffc/diff:/var/lib/docker/overlay2/4e138a3f15de3b2fe2a879e2237a681a0b263ad4a0f2196d58b3260b92404f6a/diff:/var/lib/docker/overlay2/de66d36f8d7b267fffc16abf9acd84ad6033c3d218b72db2571ba65e9ef0faf1/diff:/var/lib/docker/overlay2/c2c25094426054cfb41f8ca1a051b4473600b8008fde8170929e339610b996f6/diff:/var/lib/docker/overlay2/ddccd9e37e4808ab1976b06c1291c95d514aa65b107809e58b3670ea4a8fe593/diff:/var/lib/d
ocker/overlay2/6ef1fee7cea1811c60767d092c6ff944bfd8a6159f9095d58680136be2d7fcbf/diff:/var/lib/docker/overlay2/e63eef144877cc1a6c64249a84bd267510b4b22069b885af1856d00b059285f0/diff:/var/lib/docker/overlay2/8689feaf8ae5622cc7da1e0e7f0c6aa3d0fecbbc5e523eb808a5930fffaa31d5/diff:/var/lib/docker/overlay2/9ea50caeb86befcbde138a93a5006b883a3f5927489512ad5b2a5315ce28c6f9/diff:/var/lib/docker/overlay2/d2b945bb66245c826053ace381021d0e01b5ba57fcd44c970d32788f29c49ce3/diff:/var/lib/docker/overlay2/08a0e639c0d2787c34234ee736a42fc34191c647d00287739b360c269d129054/diff:/var/lib/docker/overlay2/8417c63f53c76cc40b71bd720109f40b329a6e06eb5c11ba37813ff2087e84e7/diff:/var/lib/docker/overlay2/6bb20fca81a71ad64758e10d71b4f350e2e91ea8ba607ebd184856dc04fbedd0/diff:/var/lib/docker/overlay2/899287a42d28a95618070c32a10a17083874a8f8ab6b63510f86b049f9b41455/diff:/var/lib/docker/overlay2/6301355d3e0a37134f2b2f5c05d797f480ab177c265a0fd4a2bdc690097238c8/diff:/var/lib/docker/overlay2/505ad9df664cf1905925b0d58f7f7c36edcc615865749271d8454b9bd45
11923/diff:/var/lib/docker/overlay2/a26863ee30aac402fb963cf206c46360275bde178ef02205f88516a6d6e87b02/diff:/var/lib/docker/overlay2/690369840cf75d6a187a6c1f3259450bbe41fb01a2794312cf925e867cce8294/diff:/var/lib/docker/overlay2/ee2eaea35c13f9409f88dfca47c2c922aa00d63dd0d64a9c5a4541a2b6138401/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bc638934d593c7ce1dfde602881d6e02ae6d3ed9ef18613127b869a04f6cceb3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bc638934d593c7ce1dfde602881d6e02ae6d3ed9ef18613127b869a04f6cceb3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bc638934d593c7ce1dfde602881d6e02ae6d3ed9ef18613127b869a04f6cceb3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-022173",
	                "Source": "/var/lib/docker/volumes/running-upgrade-022173/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-022173",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-022173",
	                "name.minikube.sigs.k8s.io": "running-upgrade-022173",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ec1e2650d25c8e58b8f3ac8c58b33987d33fa7f04714336d876215c663f0e0dc",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34537"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34536"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34535"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34534"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/ec1e2650d25c",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "running-upgrade-022173": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.70.158"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "3601e5b099b1",
	                        "running-upgrade-022173"
	                    ],
	                    "NetworkID": "b9e998df39761c393eb1f75e95b69b6b7e22568bc14e6cfd9c19647291677e83",
	                    "EndpointID": "4538cb835094586f5ac531fcace4c4700edca14004e0f8a7bb4a2a3ede90ac7e",
	                    "Gateway": "192.168.70.1",
	                    "IPAddress": "192.168.70.158",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:46:9e",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p running-upgrade-022173 -n running-upgrade-022173
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p running-upgrade-022173 -n running-upgrade-022173: exit status 4 (532.188617ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1009 23:40:31.521010 1672433 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-022173" does not appear in /home/jenkins/minikube-integration/17375-1537865/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-022173" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-022173" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-022173
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-022173: (2.916358023s)
--- FAIL: TestRunningBinaryUpgrade (78.79s)

                                                
                                    
x
+
TestMissingContainerUpgrade (180.04s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:322: (dbg) Run:  /tmp/minikube-v1.17.0.1214739379.exe start -p missing-upgrade-201150 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:322: (dbg) Done: /tmp/minikube-v1.17.0.1214739379.exe start -p missing-upgrade-201150 --memory=2200 --driver=docker  --container-runtime=crio: (2m13.726488516s)
version_upgrade_test.go:331: (dbg) Run:  docker stop missing-upgrade-201150
version_upgrade_test.go:331: (dbg) Done: docker stop missing-upgrade-201150: (2.229318441s)
version_upgrade_test.go:336: (dbg) Run:  docker rm missing-upgrade-201150
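The three commands above construct the scenario the test name describes: provision with the old binary, then stop and delete the node container behind minikube's back, so the new binary has to detect the loss and recreate it. Condensed, the reproduction amounts to (a sketch reusing the test's temporary binary path):

	/tmp/minikube-v1.17.0.1214739379.exe start -p missing-upgrade-201150 --memory=2200 --driver=docker --container-runtime=crio
	docker stop missing-upgrade-201150 && docker rm missing-upgrade-201150
	out/minikube-linux-arm64 start -p missing-upgrade-201150 --memory=2200 --alsologtostderr -v=1 --driver=docker --container-runtime=crio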
version_upgrade_test.go:342: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-201150 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:342: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p missing-upgrade-201150 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (40.015999915s)

                                                
                                                
-- stdout --
	* [missing-upgrade-201150] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17375
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17375-1537865/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17375-1537865/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.2
	* Using the docker driver based on existing profile
	* Starting control plane node missing-upgrade-201150 in cluster missing-upgrade-201150
	* Pulling base image ...
	* docker "missing-upgrade-201150" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 23:33:47.206714 1656299 out.go:296] Setting OutFile to fd 1 ...
	I1009 23:33:47.206893 1656299 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1009 23:33:47.206903 1656299 out.go:309] Setting ErrFile to fd 2...
	I1009 23:33:47.206909 1656299 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1009 23:33:47.207236 1656299 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17375-1537865/.minikube/bin
	I1009 23:33:47.208230 1656299 out.go:303] Setting JSON to false
	I1009 23:33:47.209846 1656299 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":26171,"bootTime":1696868257,"procs":339,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1009 23:33:47.209927 1656299 start.go:138] virtualization:  
	I1009 23:33:47.215257 1656299 out.go:177] * [missing-upgrade-201150] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1009 23:33:47.217924 1656299 out.go:177]   - MINIKUBE_LOCATION=17375
	I1009 23:33:47.220328 1656299 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 23:33:47.218020 1656299 notify.go:220] Checking for updates...
	I1009 23:33:47.226471 1656299 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17375-1537865/kubeconfig
	I1009 23:33:47.228932 1656299 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17375-1537865/.minikube
	I1009 23:33:47.231948 1656299 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1009 23:33:47.234169 1656299 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 23:33:47.236853 1656299 config.go:182] Loaded profile config "missing-upgrade-201150": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1009 23:33:47.239885 1656299 out.go:177] * Kubernetes 1.28.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.2
	I1009 23:33:47.242585 1656299 driver.go:378] Setting default libvirt URI to qemu:///system
	I1009 23:33:47.278058 1656299 docker.go:122] docker version: linux-24.0.6:Docker Engine - Community
	I1009 23:33:47.278157 1656299 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 23:33:47.425553 1656299 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:53 SystemTime:2023-10-09 23:33:47.407465618 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1009 23:33:47.425664 1656299 docker.go:295] overlay module found
	I1009 23:33:47.429857 1656299 out.go:177] * Using the docker driver based on existing profile
	I1009 23:33:47.432197 1656299 start.go:298] selected driver: docker
	I1009 23:33:47.432220 1656299 start.go:902] validating driver "docker" against &{Name:missing-upgrade-201150 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:missing-upgrade-201150 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.191 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1009 23:33:47.432313 1656299 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 23:33:47.432920 1656299 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 23:33:47.549209 1656299 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:53 SystemTime:2023-10-09 23:33:47.532691293 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1009 23:33:47.549574 1656299 cni.go:84] Creating CNI manager for ""
	I1009 23:33:47.549596 1656299 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 23:33:47.549609 1656299 start_flags.go:323] config:
	{Name:missing-upgrade-201150 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:missing-upgrade-201150 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.191 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1009 23:33:47.552278 1656299 out.go:177] * Starting control plane node missing-upgrade-201150 in cluster missing-upgrade-201150
	I1009 23:33:47.554320 1656299 cache.go:122] Beginning downloading kic base image for docker with crio
	I1009 23:33:47.558042 1656299 out.go:177] * Pulling base image ...
	I1009 23:33:47.560426 1656299 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I1009 23:33:47.560591 1656299 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I1009 23:33:47.584095 1656299 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e to local cache
	I1009 23:33:47.584312 1656299 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local cache directory
	I1009 23:33:47.584767 1656299 image.go:118] Writing gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e to local cache
	W1009 23:33:47.674119 1656299 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
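	The 404 above only means that no arm64 cri-o preload tarball was ever published for Kubernetes v1.20.2, so minikube falls back to caching each image individually (the cache.go lines that follow). The gap can be confirmed directly, e.g. (a sketch):
	    curl -s -o /dev/null -w '%{http_code}\n' \
	      https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4
	    # 404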
	I1009 23:33:47.674277 1656299 profile.go:148] Saving config to /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/missing-upgrade-201150/config.json ...
	I1009 23:33:47.674412 1656299 cache.go:107] acquiring lock: {Name:mk547194f29dea59c964eb78b3cd8e0df5ea5528 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 23:33:47.674497 1656299 cache.go:115] /home/jenkins/minikube-integration/17375-1537865/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1009 23:33:47.674506 1656299 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17375-1537865/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 101.087µs
	I1009 23:33:47.674665 1656299 cache.go:107] acquiring lock: {Name:mke13f05d30ce0fb18987deeb13e5c57f2f12df8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 23:33:47.674866 1656299 cache.go:107] acquiring lock: {Name:mkdc4aa1196ced396ac0e67ae116ccdb9bcc1929 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 23:33:47.675104 1656299 cache.go:107] acquiring lock: {Name:mk8415099262f4fae61bef3a512e73836de3726f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 23:33:47.675194 1656299 cache.go:107] acquiring lock: {Name:mkb84087b9344d3ebb38a0b9c0a1052ff06b25bc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 23:33:47.675348 1656299 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.2
	I1009 23:33:47.675522 1656299 cache.go:107] acquiring lock: {Name:mkc53020ba3f290825df8d311190ba6aa55b0d24 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 23:33:47.675605 1656299 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.2
	I1009 23:33:47.675168 1656299 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17375-1537865/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1009 23:33:47.675771 1656299 cache.go:107] acquiring lock: {Name:mk7f72d4bbe8fdd9109bcd737dfac688966b3c64 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 23:33:47.675848 1656299 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1009 23:33:47.675944 1656299 cache.go:107] acquiring lock: {Name:mkc50cf68a41acb8ed077d688832a34dd3175437 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 23:33:47.676016 1656299 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I1009 23:33:47.677135 1656299 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.2
	I1009 23:33:47.677593 1656299 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1009 23:33:47.677775 1656299 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.2
	I1009 23:33:47.678175 1656299 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.2
	I1009 23:33:47.678630 1656299 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1009 23:33:47.678872 1656299 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.2
	I1009 23:33:47.679349 1656299 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.2
	I1009 23:33:47.679774 1656299 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1009 23:33:47.680495 1656299 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.2
	I1009 23:33:47.680509 1656299 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1009 23:33:48.095301 1656299 cache.go:162] opening:  /home/jenkins/minikube-integration/17375-1537865/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2
	I1009 23:33:48.149160 1656299 cache.go:162] opening:  /home/jenkins/minikube-integration/17375-1537865/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2
	W1009 23:33:48.212269 1656299 image.go:265] image registry.k8s.io/coredns:1.7.0 arch mismatch: want arm64 got amd64. fixing
	I1009 23:33:48.212347 1656299 cache.go:162] opening:  /home/jenkins/minikube-integration/17375-1537865/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0
	I1009 23:33:48.214973 1656299 cache.go:162] opening:  /home/jenkins/minikube-integration/17375-1537865/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	W1009 23:33:48.244623 1656299 image.go:265] image registry.k8s.io/kube-proxy:v1.20.2 arch mismatch: want arm64 got amd64. fixing
	I1009 23:33:48.244716 1656299 cache.go:162] opening:  /home/jenkins/minikube-integration/17375-1537865/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2
	W1009 23:33:48.249961 1656299 image.go:265] image registry.k8s.io/etcd:3.4.13-0 arch mismatch: want arm64 got amd64. fixing
	I1009 23:33:48.250073 1656299 cache.go:162] opening:  /home/jenkins/minikube-integration/17375-1537865/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0
	I1009 23:33:48.257239 1656299 cache.go:162] opening:  /home/jenkins/minikube-integration/17375-1537865/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2
	    > gcr.io/k8s-minikube/kicbase...: [interleaved download progress frames elided here and below]
	I1009 23:33:48.412733 1656299 cache.go:157] /home/jenkins/minikube-integration/17375-1537865/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I1009 23:33:48.412809 1656299 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17375-1537865/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 737.623908ms
	I1009 23:33:48.412838 1656299 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17375-1537865/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	I1009 23:33:48.826678 1656299 cache.go:157] /home/jenkins/minikube-integration/17375-1537865/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I1009 23:33:48.826706 1656299 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/17375-1537865/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 1.150762559s
	I1009 23:33:48.826727 1656299 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/17375-1537865/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	I1009 23:33:48.861098 1656299 cache.go:157] /home/jenkins/minikube-integration/17375-1537865/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I1009 23:33:48.861173 1656299 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/17375-1537865/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 1.185655789s
	I1009 23:33:48.861199 1656299 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/17375-1537865/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	I1009 23:33:49.284315 1656299 cache.go:157] /home/jenkins/minikube-integration/17375-1537865/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I1009 23:33:49.291394 1656299 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/17375-1537865/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 1.616282954s
	I1009 23:33:49.291475 1656299 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/17375-1537865/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	I1009 23:33:49.333335 1656299 cache.go:157] /home/jenkins/minikube-integration/17375-1537865/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I1009 23:33:49.333365 1656299 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/17375-1537865/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 1.658503469s
	I1009 23:33:49.333379 1656299 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/17375-1537865/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	I1009 23:33:50.144283 1656299 cache.go:157] /home/jenkins/minikube-integration/17375-1537865/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I1009 23:33:50.144311 1656299 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/17375-1537865/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 2.469651014s
	I1009 23:33:50.144325 1656299 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/17375-1537865/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	I1009 23:33:52.869531 1656299 cache.go:157] /home/jenkins/minikube-integration/17375-1537865/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I1009 23:33:52.869560 1656299 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/17375-1537865/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 5.193790833s
	I1009 23:33:52.869574 1656299 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/17375-1537865/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I1009 23:33:52.869591 1656299 cache.go:87] Successfully saved all images to host disk.
	    > gcr.io/k8s-minikube/kicbase...:  287.99 MiB / 287.99 MiB  100.00% [remaining progress frames elided]
	I1009 23:33:56.206032 1656299 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e as a tarball
	I1009 23:33:56.206043 1656299 cache.go:163] Loading gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e from local cache
	I1009 23:33:57.307782 1656299 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e from cached tarball
	I1009 23:33:57.307820 1656299 cache.go:195] Successfully downloaded all kic artifacts
	I1009 23:33:57.307860 1656299 start.go:365] acquiring machines lock for missing-upgrade-201150: {Name:mk93e834a5ea1644b295c0cabc829415b061d993 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 23:33:57.307936 1656299 start.go:369] acquired machines lock for "missing-upgrade-201150" in 56.96µs
	I1009 23:33:57.307960 1656299 start.go:96] Skipping create...Using existing machine configuration
	I1009 23:33:57.307967 1656299 fix.go:54] fixHost starting: 
	I1009 23:33:57.308241 1656299 cli_runner.go:164] Run: docker container inspect missing-upgrade-201150 --format={{.State.Status}}
	W1009 23:33:57.325804 1656299 cli_runner.go:211] docker container inspect missing-upgrade-201150 --format={{.State.Status}} returned with exit code 1
	I1009 23:33:57.325866 1656299 fix.go:102] recreateIfNeeded on missing-upgrade-201150: state= err=unknown state "missing-upgrade-201150": docker container inspect missing-upgrade-201150 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-201150
	I1009 23:33:57.325884 1656299 fix.go:107] machineExists: false. err=machine does not exist
	I1009 23:33:57.330474 1656299 out.go:177] * docker "missing-upgrade-201150" container is missing, will recreate.
	I1009 23:33:57.332773 1656299 delete.go:124] DEMOLISHING missing-upgrade-201150 ...
	I1009 23:33:57.332887 1656299 cli_runner.go:164] Run: docker container inspect missing-upgrade-201150 --format={{.State.Status}}
	W1009 23:33:57.350057 1656299 cli_runner.go:211] docker container inspect missing-upgrade-201150 --format={{.State.Status}} returned with exit code 1
	W1009 23:33:57.350147 1656299 stop.go:75] unable to get state: unknown state "missing-upgrade-201150": docker container inspect missing-upgrade-201150 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-201150
	I1009 23:33:57.350172 1656299 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "missing-upgrade-201150": docker container inspect missing-upgrade-201150 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-201150
	I1009 23:33:57.350740 1656299 cli_runner.go:164] Run: docker container inspect missing-upgrade-201150 --format={{.State.Status}}
	W1009 23:33:57.367768 1656299 cli_runner.go:211] docker container inspect missing-upgrade-201150 --format={{.State.Status}} returned with exit code 1
	I1009 23:33:57.367839 1656299 delete.go:82] Unable to get host status for missing-upgrade-201150, assuming it has already been deleted: state: unknown state "missing-upgrade-201150": docker container inspect missing-upgrade-201150 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-201150
	I1009 23:33:57.367909 1656299 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-201150
	W1009 23:33:57.386424 1656299 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-201150 returned with exit code 1
	I1009 23:33:57.386460 1656299 kic.go:368] could not find the container missing-upgrade-201150 to remove it. will try anyways
	I1009 23:33:57.386522 1656299 cli_runner.go:164] Run: docker container inspect missing-upgrade-201150 --format={{.State.Status}}
	W1009 23:33:57.403590 1656299 cli_runner.go:211] docker container inspect missing-upgrade-201150 --format={{.State.Status}} returned with exit code 1
	W1009 23:33:57.403647 1656299 oci.go:84] error getting container status, will try to delete anyways: unknown state "missing-upgrade-201150": docker container inspect missing-upgrade-201150 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-201150
	I1009 23:33:57.403716 1656299 cli_runner.go:164] Run: docker exec --privileged -t missing-upgrade-201150 /bin/bash -c "sudo init 0"
	W1009 23:33:57.420765 1656299 cli_runner.go:211] docker exec --privileged -t missing-upgrade-201150 /bin/bash -c "sudo init 0" returned with exit code 1
	I1009 23:33:57.420808 1656299 oci.go:650] error shutdown missing-upgrade-201150: docker exec --privileged -t missing-upgrade-201150 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-201150
	I1009 23:33:58.421007 1656299 cli_runner.go:164] Run: docker container inspect missing-upgrade-201150 --format={{.State.Status}}
	W1009 23:33:58.440027 1656299 cli_runner.go:211] docker container inspect missing-upgrade-201150 --format={{.State.Status}} returned with exit code 1
	I1009 23:33:58.440105 1656299 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-201150": docker container inspect missing-upgrade-201150 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-201150
	I1009 23:33:58.440126 1656299 oci.go:664] temporary error: container missing-upgrade-201150 status is  but expect it to be exited
	I1009 23:33:58.440165 1656299 retry.go:31] will retry after 271.353244ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-201150": docker container inspect missing-upgrade-201150 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-201150
	I1009 23:33:58.712700 1656299 cli_runner.go:164] Run: docker container inspect missing-upgrade-201150 --format={{.State.Status}}
	W1009 23:33:58.729725 1656299 cli_runner.go:211] docker container inspect missing-upgrade-201150 --format={{.State.Status}} returned with exit code 1
	I1009 23:33:58.729795 1656299 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-201150": docker container inspect missing-upgrade-201150 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-201150
	I1009 23:33:58.729807 1656299 oci.go:664] temporary error: container missing-upgrade-201150 status is  but expect it to be exited
	I1009 23:33:58.729832 1656299 retry.go:31] will retry after 811.434306ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-201150": docker container inspect missing-upgrade-201150 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-201150
	I1009 23:33:59.541472 1656299 cli_runner.go:164] Run: docker container inspect missing-upgrade-201150 --format={{.State.Status}}
	W1009 23:33:59.559153 1656299 cli_runner.go:211] docker container inspect missing-upgrade-201150 --format={{.State.Status}} returned with exit code 1
	I1009 23:33:59.559241 1656299 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-201150": docker container inspect missing-upgrade-201150 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-201150
	I1009 23:33:59.559282 1656299 oci.go:664] temporary error: container missing-upgrade-201150 status is  but expect it to be exited
	I1009 23:33:59.559309 1656299 retry.go:31] will retry after 917.556464ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-201150": docker container inspect missing-upgrade-201150 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-201150
	I1009 23:34:00.477164 1656299 cli_runner.go:164] Run: docker container inspect missing-upgrade-201150 --format={{.State.Status}}
	W1009 23:34:00.500426 1656299 cli_runner.go:211] docker container inspect missing-upgrade-201150 --format={{.State.Status}} returned with exit code 1
	I1009 23:34:00.500495 1656299 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-201150": docker container inspect missing-upgrade-201150 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-201150
	I1009 23:34:00.500509 1656299 oci.go:664] temporary error: container missing-upgrade-201150 status is  but expect it to be exited
	I1009 23:34:00.500536 1656299 retry.go:31] will retry after 2.241256371s: couldn't verify container is exited. %v: unknown state "missing-upgrade-201150": docker container inspect missing-upgrade-201150 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-201150
	I1009 23:34:02.742939 1656299 cli_runner.go:164] Run: docker container inspect missing-upgrade-201150 --format={{.State.Status}}
	W1009 23:34:02.761030 1656299 cli_runner.go:211] docker container inspect missing-upgrade-201150 --format={{.State.Status}} returned with exit code 1
	I1009 23:34:02.761091 1656299 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-201150": docker container inspect missing-upgrade-201150 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-201150
	I1009 23:34:02.761103 1656299 oci.go:664] temporary error: container missing-upgrade-201150 status is  but expect it to be exited
	I1009 23:34:02.761131 1656299 retry.go:31] will retry after 3.005346304s: couldn't verify container is exited. %v: unknown state "missing-upgrade-201150": docker container inspect missing-upgrade-201150 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-201150
	I1009 23:34:05.767252 1656299 cli_runner.go:164] Run: docker container inspect missing-upgrade-201150 --format={{.State.Status}}
	W1009 23:34:05.792942 1656299 cli_runner.go:211] docker container inspect missing-upgrade-201150 --format={{.State.Status}} returned with exit code 1
	I1009 23:34:05.793004 1656299 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-201150": docker container inspect missing-upgrade-201150 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-201150
	I1009 23:34:05.793016 1656299 oci.go:664] temporary error: container missing-upgrade-201150 status is  but expect it to be exited
	I1009 23:34:05.793041 1656299 retry.go:31] will retry after 3.803099241s: couldn't verify container is exited. %v: unknown state "missing-upgrade-201150": docker container inspect missing-upgrade-201150 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-201150
	I1009 23:34:09.598033 1656299 cli_runner.go:164] Run: docker container inspect missing-upgrade-201150 --format={{.State.Status}}
	W1009 23:34:09.614744 1656299 cli_runner.go:211] docker container inspect missing-upgrade-201150 --format={{.State.Status}} returned with exit code 1
	I1009 23:34:09.614807 1656299 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-201150": docker container inspect missing-upgrade-201150 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-201150
	I1009 23:34:09.614821 1656299 oci.go:664] temporary error: container missing-upgrade-201150 status is  but expect it to be exited
	I1009 23:34:09.614846 1656299 retry.go:31] will retry after 6.763286959s: couldn't verify container is exited. %v: unknown state "missing-upgrade-201150": docker container inspect missing-upgrade-201150 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-201150
	I1009 23:34:16.379413 1656299 cli_runner.go:164] Run: docker container inspect missing-upgrade-201150 --format={{.State.Status}}
	W1009 23:34:16.398764 1656299 cli_runner.go:211] docker container inspect missing-upgrade-201150 --format={{.State.Status}} returned with exit code 1
	I1009 23:34:16.398826 1656299 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-201150": docker container inspect missing-upgrade-201150 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-201150
	I1009 23:34:16.398840 1656299 oci.go:664] temporary error: container missing-upgrade-201150 status is  but expect it to be exited
	I1009 23:34:16.398876 1656299 oci.go:88] couldn't shut down missing-upgrade-201150 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "missing-upgrade-201150": docker container inspect missing-upgrade-201150 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-201150
	 
	I1009 23:34:16.398940 1656299 cli_runner.go:164] Run: docker rm -f -v missing-upgrade-201150
	I1009 23:34:16.424103 1656299 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-201150
	W1009 23:34:16.442290 1656299 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-201150 returned with exit code 1
	I1009 23:34:16.442379 1656299 cli_runner.go:164] Run: docker network inspect missing-upgrade-201150 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 23:34:16.461770 1656299 cli_runner.go:164] Run: docker network rm missing-upgrade-201150
	I1009 23:34:16.626035 1656299 fix.go:114] Sleeping 1 second for extra luck!
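
The repeated retry.go/oci.go lines above show minikube polling the container's state with growing delays before giving up on a clean shutdown. A minimal Go sketch of that retry-with-backoff pattern (hypothetical function names; not minikube's actual retry package):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitExited polls `docker container inspect` until the container reports
// "exited" or the time budget runs out, roughly doubling the delay between
// attempts, like the growing intervals logged by retry.go above.
func waitExited(name string, budget time.Duration) error {
	deadline := time.Now().Add(budget)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Status}}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "exited" {
			return nil
		}
		time.Sleep(delay)
		delay *= 2 // back off before the next inspect
	}
	return fmt.Errorf("couldn't verify container %s is exited", name)
}

func main() {
	fmt.Println(waitExited("missing-upgrade-201150", 20*time.Second))
}
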
	I1009 23:34:17.627113 1656299 start.go:125] createHost starting for "" (driver="docker")
	I1009 23:34:17.630907 1656299 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1009 23:34:17.631064 1656299 start.go:159] libmachine.API.Create for "missing-upgrade-201150" (driver="docker")
	I1009 23:34:17.631089 1656299 client.go:168] LocalClient.Create starting
	I1009 23:34:17.631189 1656299 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/ca.pem
	I1009 23:34:17.631224 1656299 main.go:141] libmachine: Decoding PEM data...
	I1009 23:34:17.631239 1656299 main.go:141] libmachine: Parsing certificate...
	I1009 23:34:17.631295 1656299 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/cert.pem
	I1009 23:34:17.631314 1656299 main.go:141] libmachine: Decoding PEM data...
	I1009 23:34:17.631330 1656299 main.go:141] libmachine: Parsing certificate...
	I1009 23:34:17.631584 1656299 cli_runner.go:164] Run: docker network inspect missing-upgrade-201150 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1009 23:34:17.654889 1656299 cli_runner.go:211] docker network inspect missing-upgrade-201150 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1009 23:34:17.654964 1656299 network_create.go:281] running [docker network inspect missing-upgrade-201150] to gather additional debugging logs...
	I1009 23:34:17.654984 1656299 cli_runner.go:164] Run: docker network inspect missing-upgrade-201150
	W1009 23:34:17.675553 1656299 cli_runner.go:211] docker network inspect missing-upgrade-201150 returned with exit code 1
	I1009 23:34:17.675581 1656299 network_create.go:284] error running [docker network inspect missing-upgrade-201150]: docker network inspect missing-upgrade-201150: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network missing-upgrade-201150 not found
	I1009 23:34:17.675596 1656299 network_create.go:286] output of [docker network inspect missing-upgrade-201150]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network missing-upgrade-201150 not found
	
	** /stderr **
	I1009 23:34:17.675692 1656299 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 23:34:17.707895 1656299 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-bbbaf27e04e4 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:09:6a:d9:0c} reservation:<nil>}
	I1009 23:34:17.708221 1656299 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-7fa9be4abd6f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:46:ca:1e:75} reservation:<nil>}
	I1009 23:34:17.708540 1656299 network.go:214] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8baf6551b8a0 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:fe:8e:67:05} reservation:<nil>}
	I1009 23:34:17.708961 1656299 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40028c50f0}
	I1009 23:34:17.708979 1656299 network_create.go:124] attempt to create docker network missing-upgrade-201150 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1009 23:34:17.709040 1656299 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=missing-upgrade-201150 missing-upgrade-201150
	I1009 23:34:17.796821 1656299 network_create.go:108] docker network missing-upgrade-201150 192.168.76.0/24 created
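
The network.go lines above walk a ladder of private /24 subnets (192.168.49.0, .58, .67, ...) and take the first one not already claimed by a host bridge. A minimal sketch of that scan, under assumed helper names (not minikube's actual network package):

package main

import (
	"fmt"
	"net"
)

// taken reports whether some host interface already owns ip (e.g. an
// existing docker bridge gateway such as br-bbbaf27e04e4 above).
func taken(ip net.IP) bool {
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return false
	}
	for _, a := range addrs {
		if ipn, ok := a.(*net.IPNet); ok && ipn.IP.Equal(ip) {
			return true
		}
	}
	return false
}

// freeSubnet returns the first candidate /24 whose gateway is unclaimed,
// stepping through the same 192.168.x.0/24 ladder the log walks.
func freeSubnet() (string, error) {
	for third := 49; third <= 255; third += 9 {
		gateway := net.IPv4(192, 168, byte(third), 1)
		if !taken(gateway) {
			return fmt.Sprintf("192.168.%d.0/24", third), nil
		}
	}
	return "", fmt.Errorf("no free private /24 found")
}

func main() {
	fmt.Println(freeSubnet())
}
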
	I1009 23:34:17.796851 1656299 kic.go:118] calculated static IP "192.168.76.2" for the "missing-upgrade-201150" container
	I1009 23:34:17.796925 1656299 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1009 23:34:17.821045 1656299 cli_runner.go:164] Run: docker volume create missing-upgrade-201150 --label name.minikube.sigs.k8s.io=missing-upgrade-201150 --label created_by.minikube.sigs.k8s.io=true
	I1009 23:34:17.844237 1656299 oci.go:103] Successfully created a docker volume missing-upgrade-201150
	I1009 23:34:17.844331 1656299 cli_runner.go:164] Run: docker run --rm --name missing-upgrade-201150-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-201150 --entrypoint /usr/bin/test -v missing-upgrade-201150:/var gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e -d /var/lib
	I1009 23:34:19.489752 1656299 cli_runner.go:217] Completed: docker run --rm --name missing-upgrade-201150-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-201150 --entrypoint /usr/bin/test -v missing-upgrade-201150:/var gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e -d /var/lib: (1.645382828s)
	I1009 23:34:19.489791 1656299 oci.go:107] Successfully prepared a docker volume missing-upgrade-201150
	I1009 23:34:19.489804 1656299 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	W1009 23:34:19.489930 1656299 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1009 23:34:19.490026 1656299 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1009 23:34:19.601219 1656299 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname missing-upgrade-201150 --name missing-upgrade-201150 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-201150 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=missing-upgrade-201150 --network missing-upgrade-201150 --ip 192.168.76.2 --volume missing-upgrade-201150:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e
	I1009 23:34:20.059452 1656299 cli_runner.go:164] Run: docker container inspect missing-upgrade-201150 --format={{.State.Running}}
	I1009 23:34:20.083498 1656299 cli_runner.go:164] Run: docker container inspect missing-upgrade-201150 --format={{.State.Status}}
	I1009 23:34:20.113256 1656299 cli_runner.go:164] Run: docker exec missing-upgrade-201150 stat /var/lib/dpkg/alternatives/iptables
	I1009 23:34:20.207703 1656299 oci.go:144] the created container "missing-upgrade-201150" has a running status.
	I1009 23:34:20.207741 1656299 kic.go:222] Creating ssh key for kic: /home/jenkins/minikube-integration/17375-1537865/.minikube/machines/missing-upgrade-201150/id_rsa...
	I1009 23:34:21.404002 1656299 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17375-1537865/.minikube/machines/missing-upgrade-201150/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1009 23:34:21.434927 1656299 cli_runner.go:164] Run: docker container inspect missing-upgrade-201150 --format={{.State.Status}}
	I1009 23:34:21.461114 1656299 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1009 23:34:21.461136 1656299 kic_runner.go:114] Args: [docker exec --privileged missing-upgrade-201150 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1009 23:34:21.547472 1656299 cli_runner.go:164] Run: docker container inspect missing-upgrade-201150 --format={{.State.Status}}
	I1009 23:34:21.585324 1656299 machine.go:88] provisioning docker machine ...
	I1009 23:34:21.585362 1656299 ubuntu.go:169] provisioning hostname "missing-upgrade-201150"
	I1009 23:34:21.585427 1656299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-201150
	I1009 23:34:21.632427 1656299 main.go:141] libmachine: Using SSH client type: native
	I1009 23:34:21.632855 1656299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 34533 <nil> <nil>}
	I1009 23:34:21.632868 1656299 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-201150 && echo "missing-upgrade-201150" | sudo tee /etc/hostname
	I1009 23:34:21.838877 1656299 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-201150
	
	I1009 23:34:21.853958 1656299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-201150
	I1009 23:34:21.881065 1656299 main.go:141] libmachine: Using SSH client type: native
	I1009 23:34:21.881509 1656299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 34533 <nil> <nil>}
	I1009 23:34:21.881532 1656299 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-201150' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-201150/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-201150' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 23:34:22.034400 1656299 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 23:34:22.034426 1656299 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17375-1537865/.minikube CaCertPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17375-1537865/.minikube}
	I1009 23:34:22.034459 1656299 ubuntu.go:177] setting up certificates
	I1009 23:34:22.034477 1656299 provision.go:83] configureAuth start
	I1009 23:34:22.034552 1656299 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-201150
	I1009 23:34:22.067356 1656299 provision.go:138] copyHostCerts
	I1009 23:34:22.067417 1656299 exec_runner.go:144] found /home/jenkins/minikube-integration/17375-1537865/.minikube/ca.pem, removing ...
	I1009 23:34:22.067431 1656299 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17375-1537865/.minikube/ca.pem
	I1009 23:34:22.067507 1656299 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17375-1537865/.minikube/ca.pem (1078 bytes)
	I1009 23:34:22.067609 1656299 exec_runner.go:144] found /home/jenkins/minikube-integration/17375-1537865/.minikube/cert.pem, removing ...
	I1009 23:34:22.067620 1656299 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17375-1537865/.minikube/cert.pem
	I1009 23:34:22.067652 1656299 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17375-1537865/.minikube/cert.pem (1123 bytes)
	I1009 23:34:22.067707 1656299 exec_runner.go:144] found /home/jenkins/minikube-integration/17375-1537865/.minikube/key.pem, removing ...
	I1009 23:34:22.067716 1656299 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17375-1537865/.minikube/key.pem
	I1009 23:34:22.067740 1656299 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17375-1537865/.minikube/key.pem (1679 bytes)
	I1009 23:34:22.067786 1656299 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17375-1537865/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17375-1537865/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17375-1537865/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-201150 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-201150]
	I1009 23:34:22.617034 1656299 provision.go:172] copyRemoteCerts
	I1009 23:34:22.617106 1656299 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 23:34:22.617158 1656299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-201150
	I1009 23:34:22.642025 1656299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34533 SSHKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/machines/missing-upgrade-201150/id_rsa Username:docker}
	I1009 23:34:22.745327 1656299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 23:34:22.771160 1656299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-1537865/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1009 23:34:22.795222 1656299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-1537865/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 23:34:22.823946 1656299 provision.go:86] duration metric: configureAuth took 789.451986ms
	I1009 23:34:22.823969 1656299 ubuntu.go:193] setting minikube options for container-runtime
	I1009 23:34:22.824162 1656299 config.go:182] Loaded profile config "missing-upgrade-201150": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1009 23:34:22.824280 1656299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-201150
	I1009 23:34:22.852666 1656299 main.go:141] libmachine: Using SSH client type: native
	I1009 23:34:22.853088 1656299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 34533 <nil> <nil>}
	I1009 23:34:22.853104 1656299 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 23:34:23.371618 1656299 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 23:34:23.371642 1656299 machine.go:91] provisioned docker machine in 1.786291745s
	I1009 23:34:23.371653 1656299 client.go:171] LocalClient.Create took 5.740557646s
	I1009 23:34:23.371696 1656299 start.go:167] duration metric: libmachine.API.Create for "missing-upgrade-201150" took 5.740631911s
	I1009 23:34:23.371709 1656299 start.go:300] post-start starting for "missing-upgrade-201150" (driver="docker")
	I1009 23:34:23.371720 1656299 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 23:34:23.371798 1656299 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 23:34:23.371870 1656299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-201150
	I1009 23:34:23.389913 1656299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34533 SSHKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/machines/missing-upgrade-201150/id_rsa Username:docker}
	I1009 23:34:23.490350 1656299 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 23:34:23.494403 1656299 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1009 23:34:23.494434 1656299 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 23:34:23.494452 1656299 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1009 23:34:23.494460 1656299 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I1009 23:34:23.494485 1656299 filesync.go:126] Scanning /home/jenkins/minikube-integration/17375-1537865/.minikube/addons for local assets ...
	I1009 23:34:23.494564 1656299 filesync.go:126] Scanning /home/jenkins/minikube-integration/17375-1537865/.minikube/files for local assets ...
	I1009 23:34:23.494649 1656299 filesync.go:149] local asset: /home/jenkins/minikube-integration/17375-1537865/.minikube/files/etc/ssl/certs/15432152.pem -> 15432152.pem in /etc/ssl/certs
	I1009 23:34:23.494776 1656299 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 23:34:23.503804 1656299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-1537865/.minikube/files/etc/ssl/certs/15432152.pem --> /etc/ssl/certs/15432152.pem (1708 bytes)
	I1009 23:34:23.527478 1656299 start.go:303] post-start completed in 155.752361ms
	I1009 23:34:23.527854 1656299 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-201150
	I1009 23:34:23.545495 1656299 profile.go:148] Saving config to /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/missing-upgrade-201150/config.json ...
	I1009 23:34:23.545789 1656299 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 23:34:23.545839 1656299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-201150
	I1009 23:34:23.563674 1656299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34533 SSHKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/machines/missing-upgrade-201150/id_rsa Username:docker}
	I1009 23:34:23.662062 1656299 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 23:34:23.668289 1656299 start.go:128] duration metric: createHost completed in 6.041070505s
	I1009 23:34:23.668383 1656299 cli_runner.go:164] Run: docker container inspect missing-upgrade-201150 --format={{.State.Status}}
	W1009 23:34:23.694025 1656299 fix.go:128] unexpected machine state, will restart: <nil>
	I1009 23:34:23.694060 1656299 machine.go:88] provisioning docker machine ...
	I1009 23:34:23.694078 1656299 ubuntu.go:169] provisioning hostname "missing-upgrade-201150"
	I1009 23:34:23.694180 1656299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-201150
	I1009 23:34:23.722875 1656299 main.go:141] libmachine: Using SSH client type: native
	I1009 23:34:23.723485 1656299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 34533 <nil> <nil>}
	I1009 23:34:23.723509 1656299 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-201150 && echo "missing-upgrade-201150" | sudo tee /etc/hostname
	I1009 23:34:23.897682 1656299 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-201150
	
	I1009 23:34:23.897784 1656299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-201150
	I1009 23:34:23.927497 1656299 main.go:141] libmachine: Using SSH client type: native
	I1009 23:34:23.927898 1656299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 34533 <nil> <nil>}
	I1009 23:34:23.927920 1656299 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-201150' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-201150/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-201150' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 23:34:24.084984 1656299 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 23:34:24.085075 1656299 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17375-1537865/.minikube CaCertPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17375-1537865/.minikube}
	I1009 23:34:24.085130 1656299 ubuntu.go:177] setting up certificates
	I1009 23:34:24.085156 1656299 provision.go:83] configureAuth start
	I1009 23:34:24.085253 1656299 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-201150
	I1009 23:34:24.116998 1656299 provision.go:138] copyHostCerts
	I1009 23:34:24.117070 1656299 exec_runner.go:144] found /home/jenkins/minikube-integration/17375-1537865/.minikube/ca.pem, removing ...
	I1009 23:34:24.117087 1656299 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17375-1537865/.minikube/ca.pem
	I1009 23:34:24.117169 1656299 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17375-1537865/.minikube/ca.pem (1078 bytes)
	I1009 23:34:24.117259 1656299 exec_runner.go:144] found /home/jenkins/minikube-integration/17375-1537865/.minikube/cert.pem, removing ...
	I1009 23:34:24.117264 1656299 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17375-1537865/.minikube/cert.pem
	I1009 23:34:24.117289 1656299 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17375-1537865/.minikube/cert.pem (1123 bytes)
	I1009 23:34:24.117359 1656299 exec_runner.go:144] found /home/jenkins/minikube-integration/17375-1537865/.minikube/key.pem, removing ...
	I1009 23:34:24.117364 1656299 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17375-1537865/.minikube/key.pem
	I1009 23:34:24.117389 1656299 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17375-1537865/.minikube/key.pem (1679 bytes)
	I1009 23:34:24.117439 1656299 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17375-1537865/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17375-1537865/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17375-1537865/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-201150 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-201150]
	I1009 23:34:24.487301 1656299 provision.go:172] copyRemoteCerts
	I1009 23:34:24.487415 1656299 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 23:34:24.487471 1656299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-201150
	I1009 23:34:24.515096 1656299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34533 SSHKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/machines/missing-upgrade-201150/id_rsa Username:docker}
	I1009 23:34:24.622902 1656299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 23:34:24.667544 1656299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-1537865/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1009 23:34:24.704322 1656299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-1537865/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 23:34:24.751488 1656299 provision.go:86] duration metric: configureAuth took 666.299753ms
	I1009 23:34:24.751556 1656299 ubuntu.go:193] setting minikube options for container-runtime
	I1009 23:34:24.751757 1656299 config.go:182] Loaded profile config "missing-upgrade-201150": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1009 23:34:24.751892 1656299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-201150
	I1009 23:34:24.784581 1656299 main.go:141] libmachine: Using SSH client type: native
	I1009 23:34:24.784990 1656299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 34533 <nil> <nil>}
	I1009 23:34:24.785006 1656299 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 23:34:25.244972 1656299 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 23:34:25.245034 1656299 machine.go:91] provisioned docker machine in 1.550965332s
	I1009 23:34:25.245059 1656299 start.go:300] post-start starting for "missing-upgrade-201150" (driver="docker")
	I1009 23:34:25.245089 1656299 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 23:34:25.245173 1656299 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 23:34:25.245271 1656299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-201150
	I1009 23:34:25.279083 1656299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34533 SSHKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/machines/missing-upgrade-201150/id_rsa Username:docker}
	I1009 23:34:25.402767 1656299 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 23:34:25.406914 1656299 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1009 23:34:25.406938 1656299 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 23:34:25.406949 1656299 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1009 23:34:25.406959 1656299 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I1009 23:34:25.406970 1656299 filesync.go:126] Scanning /home/jenkins/minikube-integration/17375-1537865/.minikube/addons for local assets ...
	I1009 23:34:25.407026 1656299 filesync.go:126] Scanning /home/jenkins/minikube-integration/17375-1537865/.minikube/files for local assets ...
	I1009 23:34:25.407102 1656299 filesync.go:149] local asset: /home/jenkins/minikube-integration/17375-1537865/.minikube/files/etc/ssl/certs/15432152.pem -> 15432152.pem in /etc/ssl/certs
	I1009 23:34:25.407263 1656299 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 23:34:25.425750 1656299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-1537865/.minikube/files/etc/ssl/certs/15432152.pem --> /etc/ssl/certs/15432152.pem (1708 bytes)
	I1009 23:34:25.464321 1656299 start.go:303] post-start completed in 219.226744ms
	I1009 23:34:25.464441 1656299 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 23:34:25.464523 1656299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-201150
	I1009 23:34:25.502795 1656299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34533 SSHKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/machines/missing-upgrade-201150/id_rsa Username:docker}
	I1009 23:34:25.615789 1656299 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 23:34:25.628167 1656299 fix.go:56] fixHost completed within 28.320192324s
	I1009 23:34:25.628194 1656299 start.go:83] releasing machines lock for "missing-upgrade-201150", held for 28.320248743s
	I1009 23:34:25.628271 1656299 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-201150
	I1009 23:34:25.662339 1656299 ssh_runner.go:195] Run: cat /version.json
	I1009 23:34:25.662393 1656299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-201150
	I1009 23:34:25.662641 1656299 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 23:34:25.662721 1656299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-201150
	I1009 23:34:25.700470 1656299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34533 SSHKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/machines/missing-upgrade-201150/id_rsa Username:docker}
	I1009 23:34:25.714804 1656299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34533 SSHKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/machines/missing-upgrade-201150/id_rsa Username:docker}
	W1009 23:34:25.837776 1656299 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1009 23:34:25.837897 1656299 ssh_runner.go:195] Run: systemctl --version
	I1009 23:34:25.962174 1656299 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 23:34:26.132516 1656299 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1009 23:34:26.143735 1656299 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 23:34:26.193368 1656299 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1009 23:34:26.193473 1656299 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 23:34:26.256399 1656299 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1009 23:34:26.256432 1656299 start.go:472] detecting cgroup driver to use...
	I1009 23:34:26.256464 1656299 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1009 23:34:26.256530 1656299 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 23:34:26.299872 1656299 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 23:34:26.322775 1656299 docker.go:198] disabling cri-docker service (if available) ...
	I1009 23:34:26.322865 1656299 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 23:34:26.338098 1656299 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 23:34:26.361654 1656299 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1009 23:34:26.384709 1656299 docker.go:208] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1009 23:34:26.384784 1656299 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 23:34:26.570913 1656299 docker.go:214] disabling docker service ...
	I1009 23:34:26.571041 1656299 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 23:34:26.597571 1656299 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 23:34:26.617499 1656299 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 23:34:26.813329 1656299 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 23:34:27.013390 1656299 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 23:34:27.028856 1656299 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 23:34:27.058690 1656299 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1009 23:34:27.058821 1656299 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 23:34:27.093697 1656299 out.go:177] 
	W1009 23:34:27.095711 1656299 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1009 23:34:27.095874 1656299 out.go:239] * 
	W1009 23:34:27.097566 1656299 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1009 23:34:27.099615 1656299 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:344: failed missing container upgrade from v1.17.0. args: out/minikube-linux-arm64 start -p missing-upgrade-201150 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio : exit status 90
version_upgrade_test.go:346: *** TestMissingContainerUpgrade FAILED at 2023-10-09 23:34:27.179646088 +0000 UTC m=+2386.896023691
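
The fatal RUNTIME_ENABLE error above comes from a sed edit aimed at /etc/crio/crio.conf.d/02-crio.conf, a drop-in that the old kicbase v0.0.17 image evidently does not ship ("sed: can't read ... No such file or directory"). A hypothetical, defensive variant (assumed names; not minikube's actual code) would probe for the drop-in first and fall back to the main cri-o config:

package main

import (
	"fmt"
	"os/exec"
)

// setPauseImage rewrites pause_image in the first cri-o config file that
// actually exists on the node, instead of assuming the drop-in layout.
func setPauseImage(run func(cmd string) error, pauseImage string) error {
	for _, conf := range []string{
		"/etc/crio/crio.conf.d/02-crio.conf", // present on newer kicbase images
		"/etc/crio/crio.conf",                // fallback for older base images
	} {
		if run(fmt.Sprintf("test -f %s", conf)) != nil {
			continue // this config file is absent on the node
		}
		return run(fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf))
	}
	return fmt.Errorf("no cri-o config file found to update pause_image")
}

func main() {
	run := func(cmd string) error { return exec.Command("sh", "-c", cmd).Run() }
	if err := setPauseImage(run, "registry.k8s.io/pause:3.2"); err != nil {
		fmt.Println("update pause_image:", err)
	}
}
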
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-201150
helpers_test.go:235: (dbg) docker inspect missing-upgrade-201150:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "798ef8c84358b6fcb5da48ba44afca68f96d8f14e58a59a72192687cf23ebaac",
	        "Created": "2023-10-09T23:34:19.619861717Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1658419,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-10-09T23:34:20.050329496Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9b79b8263a5873a7b57b8bb7698df1f71e90108b3174dea92dc6c576c0a9dbf9",
	        "ResolvConfPath": "/var/lib/docker/containers/798ef8c84358b6fcb5da48ba44afca68f96d8f14e58a59a72192687cf23ebaac/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/798ef8c84358b6fcb5da48ba44afca68f96d8f14e58a59a72192687cf23ebaac/hostname",
	        "HostsPath": "/var/lib/docker/containers/798ef8c84358b6fcb5da48ba44afca68f96d8f14e58a59a72192687cf23ebaac/hosts",
	        "LogPath": "/var/lib/docker/containers/798ef8c84358b6fcb5da48ba44afca68f96d8f14e58a59a72192687cf23ebaac/798ef8c84358b6fcb5da48ba44afca68f96d8f14e58a59a72192687cf23ebaac-json.log",
	        "Name": "/missing-upgrade-201150",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "missing-upgrade-201150:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "missing-upgrade-201150",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/5d4d8b92db5b56c0d407ee12b2bb2d7b566682a3b0f9c22b55e6343f9fddb20d-init/diff:/var/lib/docker/overlay2/c0cacb43013f357d5b48728266f9ec9a8485b3a8123f16ae6f8d14de6a9db49f/diff:/var/lib/docker/overlay2/0d91dc2eddd63cca6b7fcdbc350fb98d3b84506add0e1af6ec319ab41dd823a6/diff:/var/lib/docker/overlay2/2d1dc1da8e04c2471aa90c226b937fdfe0a52e87ff07e6e140d7f3ec5f8ca4d0/diff:/var/lib/docker/overlay2/7ad540f91c8d2b245de3388b369fee1641472d35bdfe40291d89d086d7347fdc/diff:/var/lib/docker/overlay2/cf59d9001ecc749f8c48d06416bc4e97a4ccc8b54524dc7fe70e091f589903df/diff:/var/lib/docker/overlay2/f7bab84a54021a495d9f2152a253a0db279f4a06c46e77a10931d73038de784d/diff:/var/lib/docker/overlay2/243cf64ddf795af8996c9e15020e4e85712a06f0d17a3ca653a0ee78a6ce4b95/diff:/var/lib/docker/overlay2/ae6eb3fbf521e37a7dfe7a4991a99fca22f8cb60347d686d3cefb0adc615ec33/diff:/var/lib/docker/overlay2/155e94e0b646a05b74df4f8efdb8730cb6896861cb177281ced501ceaf4899bd/diff:/var/lib/docker/overlay2/28bb58
0f2644f8dc9fdf9840d2d9785057b9d37f3cd7bf5c8748ebbfc8110aec/diff:/var/lib/docker/overlay2/3bf5bc2b75665462e77b9fdfcac531511648437ce07488324b0911c6d4de184b/diff:/var/lib/docker/overlay2/550c9c9125a6b92baed811670dc6aa4c283bce1d609893cff6d8b8dbb7672279/diff:/var/lib/docker/overlay2/5d28aaca6a283efa8be6df756dbdac6fe95fe2a99bc203378cfd4caa791ff69c/diff:/var/lib/docker/overlay2/c6ecb394b3183f092f22662d139c49d7e5aea5351afaf18ffdcff16633bb3aa0/diff:/var/lib/docker/overlay2/65d6e447fe9736386e573e5b4a67d5a1d259d2f0b2f0ec2d2a06cef35b974160/diff:/var/lib/docker/overlay2/90cfaaacc658f969dd661cecfe0813973608044c302c6dd777804748b2c4dffc/diff:/var/lib/docker/overlay2/4e138a3f15de3b2fe2a879e2237a681a0b263ad4a0f2196d58b3260b92404f6a/diff:/var/lib/docker/overlay2/de66d36f8d7b267fffc16abf9acd84ad6033c3d218b72db2571ba65e9ef0faf1/diff:/var/lib/docker/overlay2/c2c25094426054cfb41f8ca1a051b4473600b8008fde8170929e339610b996f6/diff:/var/lib/docker/overlay2/ddccd9e37e4808ab1976b06c1291c95d514aa65b107809e58b3670ea4a8fe593/diff:/var/lib/d
ocker/overlay2/6ef1fee7cea1811c60767d092c6ff944bfd8a6159f9095d58680136be2d7fcbf/diff:/var/lib/docker/overlay2/e63eef144877cc1a6c64249a84bd267510b4b22069b885af1856d00b059285f0/diff:/var/lib/docker/overlay2/8689feaf8ae5622cc7da1e0e7f0c6aa3d0fecbbc5e523eb808a5930fffaa31d5/diff:/var/lib/docker/overlay2/9ea50caeb86befcbde138a93a5006b883a3f5927489512ad5b2a5315ce28c6f9/diff:/var/lib/docker/overlay2/d2b945bb66245c826053ace381021d0e01b5ba57fcd44c970d32788f29c49ce3/diff:/var/lib/docker/overlay2/08a0e639c0d2787c34234ee736a42fc34191c647d00287739b360c269d129054/diff:/var/lib/docker/overlay2/8417c63f53c76cc40b71bd720109f40b329a6e06eb5c11ba37813ff2087e84e7/diff:/var/lib/docker/overlay2/6bb20fca81a71ad64758e10d71b4f350e2e91ea8ba607ebd184856dc04fbedd0/diff:/var/lib/docker/overlay2/899287a42d28a95618070c32a10a17083874a8f8ab6b63510f86b049f9b41455/diff:/var/lib/docker/overlay2/6301355d3e0a37134f2b2f5c05d797f480ab177c265a0fd4a2bdc690097238c8/diff:/var/lib/docker/overlay2/505ad9df664cf1905925b0d58f7f7c36edcc615865749271d8454b9bd45
11923/diff:/var/lib/docker/overlay2/a26863ee30aac402fb963cf206c46360275bde178ef02205f88516a6d6e87b02/diff:/var/lib/docker/overlay2/690369840cf75d6a187a6c1f3259450bbe41fb01a2794312cf925e867cce8294/diff:/var/lib/docker/overlay2/ee2eaea35c13f9409f88dfca47c2c922aa00d63dd0d64a9c5a4541a2b6138401/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5d4d8b92db5b56c0d407ee12b2bb2d7b566682a3b0f9c22b55e6343f9fddb20d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5d4d8b92db5b56c0d407ee12b2bb2d7b566682a3b0f9c22b55e6343f9fddb20d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5d4d8b92db5b56c0d407ee12b2bb2d7b566682a3b0f9c22b55e6343f9fddb20d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "missing-upgrade-201150",
	                "Source": "/var/lib/docker/volumes/missing-upgrade-201150/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "missing-upgrade-201150",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "missing-upgrade-201150",
	                "name.minikube.sigs.k8s.io": "missing-upgrade-201150",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d097549db4c2fc5230a0bc5b3def0444cd65b8053883944534c126a500c8d8b7",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34533"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34532"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34529"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34531"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34530"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/d097549db4c2",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "missing-upgrade-201150": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "798ef8c84358",
	                        "missing-upgrade-201150"
	                    ],
	                    "NetworkID": "2367dc30161815506e76410c72b4b32b2af79b454283ad754f69ff8ba21216b1",
	                    "EndpointID": "a6b533be44ed0264cfb75527bb54c868edccbd6910958afba2b6338e7b15c833",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
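The full inspect dump above is more than the triage needs; the fields that actually matter here (container state, assigned IP, forwarded apiserver port) can be pulled directly with Go templates. A minimal sketch, not part of the harness, reusing the container name from this run:

	# State and network IP as Docker records them
	docker inspect -f '{{.State.Status}}' missing-upgrade-201150
	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' missing-upgrade-201150
	# Host port forwarded to 8443/tcp (the apiserver)
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' missing-upgrade-201150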
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p missing-upgrade-201150 -n missing-upgrade-201150
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p missing-upgrade-201150 -n missing-upgrade-201150: exit status 6 (539.501305ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1009 23:34:27.764128 1659632 status.go:415] kubeconfig endpoint: got: 192.168.59.191:8443, want: 192.168.76.2:8443

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "missing-upgrade-201150" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
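The exit status 6 is the stale-kubeconfig case the stderr spells out: the profile's container came back on 192.168.76.2 (see the inspect output above) while the kubeconfig still names 192.168.59.191. A sketch of confirming and repairing that mismatch by hand, outside the harness; the cluster name in the jsonpath filter assumes minikube's usual profile-named kubeconfig entry:

	# What the kubeconfig thinks the endpoint is
	kubectl config view -o jsonpath='{.clusters[?(@.name=="missing-upgrade-201150")].cluster.server}'
	# What the container actually got
	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' missing-upgrade-201150
	# Rewrite the kubeconfig entry to match, as the warning itself suggests
	minikube update-context -p missing-upgrade-201150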
helpers_test.go:175: Cleaning up "missing-upgrade-201150" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-201150
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-201150: (2.047280614s)
--- FAIL: TestMissingContainerUpgrade (180.04s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (409.49s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.17.0.3499164679.exe start -p stopped-upgrade-991232 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:196: (dbg) Non-zero exit: /tmp/minikube-v1.17.0.3499164679.exe start -p stopped-upgrade-991232 --memory=2200 --vm-driver=docker  --container-runtime=crio: exit status 80 (22.299885124s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-991232] minikube v1.17.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17375
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17375-1537865/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/tmp/legacy_kubeconfig149497843
	* Using the docker driver based on user configuration
	* Starting control plane node stopped-upgrade-991232 in cluster stopped-upgrade-991232
	* Downloading Kubernetes v1.20.2 preload ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v8-v1....: 579.81 MiB / 579.81 MiB  100.00% 49.92 MiB p/s
	X Exiting due to GUEST_PROVISION: Failed to start host: can't create with that IP, address already in use
	* 
	* If the above advice does not help, please let us know: 
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.17.0.3499164679.exe start -p stopped-upgrade-991232 --memory=2200 --vm-driver=docker  --container-runtime=crio
E1009 23:35:56.385682 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/addons-749116/client.crt: no such file or directory
E1009 23:36:11.758150 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/functional-634060/client.crt: no such file or directory
E1009 23:37:53.339623 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/addons-749116/client.crt: no such file or directory
version_upgrade_test.go:196: (dbg) Non-zero exit: /tmp/minikube-v1.17.0.3499164679.exe start -p stopped-upgrade-991232 --memory=2200 --vm-driver=docker  --container-runtime=crio: exit status 80 (3m11.856109365s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-991232] minikube v1.17.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17375
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17375-1537865/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/tmp/legacy_kubeconfig836376573
	* Using the docker driver based on existing profile
	* Starting control plane node stopped-upgrade-991232 in cluster stopped-upgrade-991232
	* docker "stopped-upgrade-991232" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PROVISION: Failed to start host: can't create with that IP, address already in use
	* 
	* If the above advice does not help, please let us know: 
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.17.0.3499164679.exe start -p stopped-upgrade-991232 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:196: (dbg) Non-zero exit: /tmp/minikube-v1.17.0.3499164679.exe start -p stopped-upgrade-991232 --memory=2200 --vm-driver=docker  --container-runtime=crio: exit status 80 (3m12.432636013s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-991232] minikube v1.17.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17375
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17375-1537865/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/tmp/legacy_kubeconfig909582651
	* Using the docker driver based on existing profile
	* Starting control plane node stopped-upgrade-991232 in cluster stopped-upgrade-991232
	* docker "stopped-upgrade-991232" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PROVISION: Failed to start host: can't create with that IP, address already in use
	* 
	* If the above advice does not help, please let us know: 
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:202: legacy v1.17.0 start failed: exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (409.49s)
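All three legacy v1.17.0 attempts die on the same GUEST_PROVISION error: the subnet the old binary wants for its docker network is already claimed on this host. A hedged diagnostic sketch (the network names are whatever the affected machine actually reports, not something this log confirms):

	# Find which docker network already owns the contested subnet
	docker network ls --format '{{.Name}}'
	docker network inspect -f '{{range .IPAM.Config}}{{.Subnet}}{{end}}' stopped-upgrade-991232
	# Remove the stale profile, and with it the conflicting network, before retrying
	minikube delete -p stopped-upgrade-991232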

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.16s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-991232
version_upgrade_test.go:219: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p stopped-upgrade-991232: exit status 85 (142.931898ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-717678 ssh -n multinode-717678 sudo cat                                       | multinode-717678            | jenkins | v1.31.2 | 09 Oct 23 23:21 UTC | 09 Oct 23 23:21 UTC |
	|         | /home/docker/cp-test_multinode-717678-m03_multinode-717678.txt                          |                             |         |         |                     |                     |
	| cp      | multinode-717678 cp multinode-717678-m03:/home/docker/cp-test.txt                       | multinode-717678            | jenkins | v1.31.2 | 09 Oct 23 23:21 UTC | 09 Oct 23 23:21 UTC |
	|         | multinode-717678-m02:/home/docker/cp-test_multinode-717678-m03_multinode-717678-m02.txt |                             |         |         |                     |                     |
	| ssh     | multinode-717678 ssh -n                                                                 | multinode-717678            | jenkins | v1.31.2 | 09 Oct 23 23:21 UTC | 09 Oct 23 23:21 UTC |
	|         | multinode-717678-m03 sudo cat                                                           |                             |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                             |         |         |                     |                     |
	| ssh     | multinode-717678 ssh -n multinode-717678-m02 sudo cat                                   | multinode-717678            | jenkins | v1.31.2 | 09 Oct 23 23:21 UTC | 09 Oct 23 23:21 UTC |
	|         | /home/docker/cp-test_multinode-717678-m03_multinode-717678-m02.txt                      |                             |         |         |                     |                     |
	| node    | multinode-717678 node stop m03                                                          | multinode-717678            | jenkins | v1.31.2 | 09 Oct 23 23:21 UTC | 09 Oct 23 23:21 UTC |
	| node    | multinode-717678 node start                                                             | multinode-717678            | jenkins | v1.31.2 | 09 Oct 23 23:21 UTC | 09 Oct 23 23:21 UTC |
	|         | m03 --alsologtostderr                                                                   |                             |         |         |                     |                     |
	| node    | list -p multinode-717678                                                                | multinode-717678            | jenkins | v1.31.2 | 09 Oct 23 23:21 UTC |                     |
	| stop    | -p multinode-717678                                                                     | multinode-717678            | jenkins | v1.31.2 | 09 Oct 23 23:21 UTC | 09 Oct 23 23:22 UTC |
	| start   | -p multinode-717678                                                                     | multinode-717678            | jenkins | v1.31.2 | 09 Oct 23 23:22 UTC | 09 Oct 23 23:23 UTC |
	|         | --wait=true -v=8                                                                        |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                             |         |         |                     |                     |
	| node    | list -p multinode-717678                                                                | multinode-717678            | jenkins | v1.31.2 | 09 Oct 23 23:23 UTC |                     |
	| node    | multinode-717678 node delete                                                            | multinode-717678            | jenkins | v1.31.2 | 09 Oct 23 23:23 UTC | 09 Oct 23 23:23 UTC |
	|         | m03                                                                                     |                             |         |         |                     |                     |
	| stop    | multinode-717678 stop                                                                   | multinode-717678            | jenkins | v1.31.2 | 09 Oct 23 23:23 UTC | 09 Oct 23 23:24 UTC |
	| start   | -p multinode-717678                                                                     | multinode-717678            | jenkins | v1.31.2 | 09 Oct 23 23:24 UTC | 09 Oct 23 23:25 UTC |
	|         | --wait=true -v=8                                                                        |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                             |         |         |                     |                     |
	|         | --driver=docker                                                                         |                             |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                             |         |         |                     |                     |
	| node    | list -p multinode-717678                                                                | multinode-717678            | jenkins | v1.31.2 | 09 Oct 23 23:25 UTC |                     |
	| start   | -p multinode-717678-m02                                                                 | multinode-717678-m02        | jenkins | v1.31.2 | 09 Oct 23 23:25 UTC |                     |
	|         | --driver=docker                                                                         |                             |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                             |         |         |                     |                     |
	| start   | -p multinode-717678-m03                                                                 | multinode-717678-m03        | jenkins | v1.31.2 | 09 Oct 23 23:25 UTC | 09 Oct 23 23:26 UTC |
	|         | --driver=docker                                                                         |                             |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                             |         |         |                     |                     |
	| node    | add -p multinode-717678                                                                 | multinode-717678            | jenkins | v1.31.2 | 09 Oct 23 23:26 UTC |                     |
	| delete  | -p multinode-717678-m03                                                                 | multinode-717678-m03        | jenkins | v1.31.2 | 09 Oct 23 23:26 UTC | 09 Oct 23 23:26 UTC |
	| delete  | -p multinode-717678                                                                     | multinode-717678            | jenkins | v1.31.2 | 09 Oct 23 23:26 UTC | 09 Oct 23 23:26 UTC |
	| start   | -p test-preload-604354                                                                  | test-preload-604354         | jenkins | v1.31.2 | 09 Oct 23 23:26 UTC | 09 Oct 23 23:27 UTC |
	|         | --memory=2200                                                                           |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                             |         |         |                     |                     |
	|         | --wait=true --preload=false                                                             |                             |         |         |                     |                     |
	|         | --driver=docker                                                                         |                             |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                             |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                             |         |         |                     |                     |
	| image   | test-preload-604354 image pull                                                          | test-preload-604354         | jenkins | v1.31.2 | 09 Oct 23 23:27 UTC | 09 Oct 23 23:27 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                             |         |         |                     |                     |
	| stop    | -p test-preload-604354                                                                  | test-preload-604354         | jenkins | v1.31.2 | 09 Oct 23 23:27 UTC | 09 Oct 23 23:28 UTC |
	| start   | -p test-preload-604354                                                                  | test-preload-604354         | jenkins | v1.31.2 | 09 Oct 23 23:28 UTC | 09 Oct 23 23:29 UTC |
	|         | --memory=2200                                                                           |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                             |         |         |                     |                     |
	|         | --wait=true --driver=docker                                                             |                             |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                             |         |         |                     |                     |
	| image   | test-preload-604354 image list                                                          | test-preload-604354         | jenkins | v1.31.2 | 09 Oct 23 23:29 UTC | 09 Oct 23 23:29 UTC |
	| delete  | -p test-preload-604354                                                                  | test-preload-604354         | jenkins | v1.31.2 | 09 Oct 23 23:29 UTC | 09 Oct 23 23:29 UTC |
	| start   | -p scheduled-stop-406285                                                                | scheduled-stop-406285       | jenkins | v1.31.2 | 09 Oct 23 23:29 UTC | 09 Oct 23 23:30 UTC |
	|         | --memory=2048 --driver=docker                                                           |                             |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-406285                                                                | scheduled-stop-406285       | jenkins | v1.31.2 | 09 Oct 23 23:30 UTC |                     |
	|         | --schedule 5m                                                                           |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-406285                                                                | scheduled-stop-406285       | jenkins | v1.31.2 | 09 Oct 23 23:30 UTC |                     |
	|         | --schedule 5m                                                                           |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-406285                                                                | scheduled-stop-406285       | jenkins | v1.31.2 | 09 Oct 23 23:30 UTC |                     |
	|         | --schedule 5m                                                                           |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-406285                                                                | scheduled-stop-406285       | jenkins | v1.31.2 | 09 Oct 23 23:30 UTC |                     |
	|         | --schedule 15s                                                                          |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-406285                                                                | scheduled-stop-406285       | jenkins | v1.31.2 | 09 Oct 23 23:30 UTC |                     |
	|         | --schedule 15s                                                                          |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-406285                                                                | scheduled-stop-406285       | jenkins | v1.31.2 | 09 Oct 23 23:30 UTC |                     |
	|         | --schedule 15s                                                                          |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-406285                                                                | scheduled-stop-406285       | jenkins | v1.31.2 | 09 Oct 23 23:30 UTC | 09 Oct 23 23:30 UTC |
	|         | --cancel-scheduled                                                                      |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-406285                                                                | scheduled-stop-406285       | jenkins | v1.31.2 | 09 Oct 23 23:30 UTC |                     |
	|         | --schedule 15s                                                                          |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-406285                                                                | scheduled-stop-406285       | jenkins | v1.31.2 | 09 Oct 23 23:30 UTC |                     |
	|         | --schedule 15s                                                                          |                             |         |         |                     |                     |
	| stop    | -p scheduled-stop-406285                                                                | scheduled-stop-406285       | jenkins | v1.31.2 | 09 Oct 23 23:30 UTC | 09 Oct 23 23:30 UTC |
	|         | --schedule 15s                                                                          |                             |         |         |                     |                     |
	| delete  | -p scheduled-stop-406285                                                                | scheduled-stop-406285       | jenkins | v1.31.2 | 09 Oct 23 23:31 UTC | 09 Oct 23 23:31 UTC |
	| start   | -p insufficient-storage-049368                                                          | insufficient-storage-049368 | jenkins | v1.31.2 | 09 Oct 23 23:31 UTC |                     |
	|         | --memory=2048 --output=json                                                             |                             |         |         |                     |                     |
	|         | --wait=true --driver=docker                                                             |                             |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                             |         |         |                     |                     |
	| delete  | -p insufficient-storage-049368                                                          | insufficient-storage-049368 | jenkins | v1.31.2 | 09 Oct 23 23:31 UTC | 09 Oct 23 23:31 UTC |
	| start   | -p NoKubernetes-349860                                                                  | NoKubernetes-349860         | jenkins | v1.31.2 | 09 Oct 23 23:31 UTC |                     |
	|         | --no-kubernetes                                                                         |                             |         |         |                     |                     |
	|         | --kubernetes-version=1.20                                                               |                             |         |         |                     |                     |
	|         | --driver=docker                                                                         |                             |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                             |         |         |                     |                     |
	| start   | -p NoKubernetes-349860                                                                  | NoKubernetes-349860         | jenkins | v1.31.2 | 09 Oct 23 23:31 UTC | 09 Oct 23 23:32 UTC |
	|         | --driver=docker                                                                         |                             |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                             |         |         |                     |                     |
	| start   | -p NoKubernetes-349860                                                                  | NoKubernetes-349860         | jenkins | v1.31.2 | 09 Oct 23 23:32 UTC | 09 Oct 23 23:32 UTC |
	|         | --no-kubernetes                                                                         |                             |         |         |                     |                     |
	|         | --driver=docker                                                                         |                             |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                             |         |         |                     |                     |
	| delete  | -p NoKubernetes-349860                                                                  | NoKubernetes-349860         | jenkins | v1.31.2 | 09 Oct 23 23:32 UTC | 09 Oct 23 23:32 UTC |
	| start   | -p NoKubernetes-349860                                                                  | NoKubernetes-349860         | jenkins | v1.31.2 | 09 Oct 23 23:32 UTC | 09 Oct 23 23:32 UTC |
	|         | --no-kubernetes                                                                         |                             |         |         |                     |                     |
	|         | --driver=docker                                                                         |                             |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                             |         |         |                     |                     |
	| ssh     | -p NoKubernetes-349860 sudo                                                             | NoKubernetes-349860         | jenkins | v1.31.2 | 09 Oct 23 23:32 UTC |                     |
	|         | systemctl is-active --quiet                                                             |                             |         |         |                     |                     |
	|         | service kubelet                                                                         |                             |         |         |                     |                     |
	| stop    | -p NoKubernetes-349860                                                                  | NoKubernetes-349860         | jenkins | v1.31.2 | 09 Oct 23 23:32 UTC | 09 Oct 23 23:32 UTC |
	| start   | -p NoKubernetes-349860                                                                  | NoKubernetes-349860         | jenkins | v1.31.2 | 09 Oct 23 23:32 UTC | 09 Oct 23 23:32 UTC |
	|         | --driver=docker                                                                         |                             |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                             |         |         |                     |                     |
	| ssh     | -p NoKubernetes-349860 sudo                                                             | NoKubernetes-349860         | jenkins | v1.31.2 | 09 Oct 23 23:32 UTC |                     |
	|         | systemctl is-active --quiet                                                             |                             |         |         |                     |                     |
	|         | service kubelet                                                                         |                             |         |         |                     |                     |
	| delete  | -p NoKubernetes-349860                                                                  | NoKubernetes-349860         | jenkins | v1.31.2 | 09 Oct 23 23:32 UTC | 09 Oct 23 23:32 UTC |
	| start   | -p kubernetes-upgrade-637449                                                            | kubernetes-upgrade-637449   | jenkins | v1.31.2 | 09 Oct 23 23:32 UTC | 09 Oct 23 23:34 UTC |
	|         | --memory=2200                                                                           |                             |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                                                            |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                             |         |         |                     |                     |
	|         | -v=1 --driver=docker                                                                    |                             |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                             |         |         |                     |                     |
	| start   | -p missing-upgrade-201150                                                               | missing-upgrade-201150      | jenkins | v1.31.2 | 09 Oct 23 23:33 UTC |                     |
	|         | --memory=2200                                                                           |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                             |         |         |                     |                     |
	|         | -v=1 --driver=docker                                                                    |                             |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                             |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-637449                                                            | kubernetes-upgrade-637449   | jenkins | v1.31.2 | 09 Oct 23 23:34 UTC | 09 Oct 23 23:34 UTC |
	| start   | -p kubernetes-upgrade-637449                                                            | kubernetes-upgrade-637449   | jenkins | v1.31.2 | 09 Oct 23 23:34 UTC | 09 Oct 23 23:38 UTC |
	|         | --memory=2200                                                                           |                             |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                                                            |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                             |         |         |                     |                     |
	|         | -v=1 --driver=docker                                                                    |                             |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                             |         |         |                     |                     |
	| delete  | -p missing-upgrade-201150                                                               | missing-upgrade-201150      | jenkins | v1.31.2 | 09 Oct 23 23:34 UTC | 09 Oct 23 23:34 UTC |
	| start   | -p kubernetes-upgrade-637449                                                            | kubernetes-upgrade-637449   | jenkins | v1.31.2 | 09 Oct 23 23:38 UTC |                     |
	|         | --memory=2200                                                                           |                             |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                                                            |                             |         |         |                     |                     |
	|         | --driver=docker                                                                         |                             |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                             |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-637449                                                            | kubernetes-upgrade-637449   | jenkins | v1.31.2 | 09 Oct 23 23:38 UTC | 09 Oct 23 23:39 UTC |
	|         | --memory=2200                                                                           |                             |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                                                            |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                             |         |         |                     |                     |
	|         | -v=1 --driver=docker                                                                    |                             |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                             |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-637449                                                            | kubernetes-upgrade-637449   | jenkins | v1.31.2 | 09 Oct 23 23:39 UTC | 09 Oct 23 23:39 UTC |
	| start   | -p running-upgrade-022173                                                               | running-upgrade-022173      | jenkins | v1.31.2 | 09 Oct 23 23:40 UTC |                     |
	|         | --memory=2200                                                                           |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                             |         |         |                     |                     |
	|         | -v=1 --driver=docker                                                                    |                             |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                             |         |         |                     |                     |
	| delete  | -p running-upgrade-022173                                                               | running-upgrade-022173      | jenkins | v1.31.2 | 09 Oct 23 23:40 UTC | 09 Oct 23 23:40 UTC |
	| start   | -p pause-078272 --memory=2048                                                           | pause-078272                | jenkins | v1.31.2 | 09 Oct 23 23:40 UTC |                     |
	|         | --install-addons=false                                                                  |                             |         |         |                     |                     |
	|         | --wait=all --driver=docker                                                              |                             |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                             |         |         |                     |                     |
	|---------|-----------------------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/09 23:40:34
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.21.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 23:40:34.617205 1672740 out.go:296] Setting OutFile to fd 1 ...
	I1009 23:40:34.617383 1672740 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1009 23:40:34.617387 1672740 out.go:309] Setting ErrFile to fd 2...
	I1009 23:40:34.617392 1672740 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1009 23:40:34.617626 1672740 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17375-1537865/.minikube/bin
	I1009 23:40:34.618038 1672740 out.go:303] Setting JSON to false
	I1009 23:40:34.618893 1672740 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":26578,"bootTime":1696868257,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1009 23:40:34.618952 1672740 start.go:138] virtualization:  
	I1009 23:40:34.623133 1672740 out.go:177] * [pause-078272] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1009 23:40:34.625016 1672740 out.go:177]   - MINIKUBE_LOCATION=17375
	I1009 23:40:34.625113 1672740 notify.go:220] Checking for updates...
	I1009 23:40:34.627131 1672740 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 23:40:34.630014 1672740 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17375-1537865/kubeconfig
	I1009 23:40:34.632040 1672740 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17375-1537865/.minikube
	I1009 23:40:34.633779 1672740 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1009 23:40:34.635853 1672740 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 23:40:34.638738 1672740 config.go:182] Loaded profile config "stopped-upgrade-991232": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1009 23:40:34.638844 1672740 driver.go:378] Setting default libvirt URI to qemu:///system
	I1009 23:40:34.666472 1672740 docker.go:122] docker version: linux-24.0.6:Docker Engine - Community
	I1009 23:40:34.666569 1672740 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 23:40:34.752221 1672740 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:35 SystemTime:2023-10-09 23:40:34.741349723 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
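The `docker system info --format "{{json .}}"` probes recorded above (info.go:266) are how minikube snapshots the host daemon — CPU count, memory, cgroup driver — before validating the docker driver. A minimal, self-contained sketch of the same probe in Go, decoding only a handful of the fields visible in the log; the struct here is illustrative, not minikube's own info.go type:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// dockerInfo decodes a subset of `docker system info` output;
	// the field names match Docker's JSON keys.
	type dockerInfo struct {
		NCPU            int    `json:"NCPU"`
		MemTotal        int64  `json:"MemTotal"`
		ServerVersion   string `json:"ServerVersion"`
		OperatingSystem string `json:"OperatingSystem"`
		CgroupDriver    string `json:"CgroupDriver"`
	}

	func main() {
		out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
		if err != nil {
			panic(err)
		}
		var info dockerInfo
		if err := json.Unmarshal(out, &info); err != nil {
			panic(err)
		}
		fmt.Printf("docker %s on %q: %d CPUs, %d bytes RAM, cgroup driver %s\n",
			info.ServerVersion, info.OperatingSystem, info.NCPU, info.MemTotal, info.CgroupDriver)
	}

Against the host captured above this would report docker 24.0.6 on Ubuntu 20.04.6 LTS with 2 CPUs and the cgroupfs driver — the values that drive the "detected cgroupfs cgroup driver" decision later in the log.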
	I1009 23:40:34.752321 1672740 docker.go:295] overlay module found
	I1009 23:40:34.754550 1672740 out.go:177] * Using the docker driver based on user configuration
	I1009 23:40:34.756370 1672740 start.go:298] selected driver: docker
	I1009 23:40:34.756379 1672740 start.go:902] validating driver "docker" against <nil>
	I1009 23:40:34.756391 1672740 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 23:40:34.757058 1672740 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 23:40:34.824902 1672740 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:35 SystemTime:2023-10-09 23:40:34.815385985 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1009 23:40:34.825049 1672740 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1009 23:40:34.825277 1672740 start_flags.go:926] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 23:40:34.827290 1672740 out.go:177] * Using Docker driver with root privileges
	I1009 23:40:34.829029 1672740 cni.go:84] Creating CNI manager for ""
	I1009 23:40:34.829041 1672740 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 23:40:34.829051 1672740 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 23:40:34.829063 1672740 start_flags.go:323] config:
	{Name:pause-078272 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:pause-078272 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1009 23:40:34.831314 1672740 out.go:177] * Starting control plane node pause-078272 in cluster pause-078272
	I1009 23:40:34.833296 1672740 cache.go:122] Beginning downloading kic base image for docker with crio
	I1009 23:40:34.835383 1672740 out.go:177] * Pulling base image ...
	I1009 23:40:34.837439 1672740 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1009 23:40:34.837488 1672740 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17375-1537865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4
	I1009 23:40:34.837496 1672740 cache.go:57] Caching tarball of preloaded images
	I1009 23:40:34.837531 1672740 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon
	I1009 23:40:34.837594 1672740 preload.go:174] Found /home/jenkins/minikube-integration/17375-1537865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1009 23:40:34.837603 1672740 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I1009 23:40:34.837711 1672740 profile.go:148] Saving config to /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/pause-078272/config.json ...
	I1009 23:40:34.837729 1672740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/pause-078272/config.json: {Name:mkaa89cc687f575df0820d1224df3dc00391c9e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 23:40:34.855741 1672740 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon, skipping pull
	I1009 23:40:34.855755 1672740 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae exists in daemon, skipping load
	I1009 23:40:34.855775 1672740 cache.go:195] Successfully downloaded all kic artifacts
	I1009 23:40:34.855808 1672740 start.go:365] acquiring machines lock for pause-078272: {Name:mkd326fc10662f36de421c99def136a6ebd3bfb1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 23:40:34.855927 1672740 start.go:369] acquired machines lock for "pause-078272" in 103.491µs
	I1009 23:40:34.855953 1672740 start.go:93] Provisioning new machine with config: &{Name:pause-078272 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:pause-078272 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 23:40:34.856035 1672740 start.go:125] createHost starting for "" (driver="docker")
	I1009 23:40:34.858631 1672740 out.go:204] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I1009 23:40:34.858893 1672740 start.go:159] libmachine.API.Create for "pause-078272" (driver="docker")
	I1009 23:40:34.858921 1672740 client.go:168] LocalClient.Create starting
	I1009 23:40:34.858992 1672740 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/ca.pem
	I1009 23:40:34.859030 1672740 main.go:141] libmachine: Decoding PEM data...
	I1009 23:40:34.859046 1672740 main.go:141] libmachine: Parsing certificate...
	I1009 23:40:34.859104 1672740 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/cert.pem
	I1009 23:40:34.859143 1672740 main.go:141] libmachine: Decoding PEM data...
	I1009 23:40:34.859153 1672740 main.go:141] libmachine: Parsing certificate...
	I1009 23:40:34.859523 1672740 cli_runner.go:164] Run: docker network inspect pause-078272 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1009 23:40:34.876560 1672740 cli_runner.go:211] docker network inspect pause-078272 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1009 23:40:34.876629 1672740 network_create.go:281] running [docker network inspect pause-078272] to gather additional debugging logs...
	I1009 23:40:34.876643 1672740 cli_runner.go:164] Run: docker network inspect pause-078272
	W1009 23:40:34.894080 1672740 cli_runner.go:211] docker network inspect pause-078272 returned with exit code 1
	I1009 23:40:34.894100 1672740 network_create.go:284] error running [docker network inspect pause-078272]: docker network inspect pause-078272: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network pause-078272 not found
	I1009 23:40:34.894111 1672740 network_create.go:286] output of [docker network inspect pause-078272]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network pause-078272 not found
	
	** /stderr **
	I1009 23:40:34.894239 1672740 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 23:40:34.912839 1672740 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-bbbaf27e04e4 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:09:6a:d9:0c} reservation:<nil>}
	I1009 23:40:34.913188 1672740 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-7fa9be4abd6f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:46:ca:1e:75} reservation:<nil>}
	I1009 23:40:34.913669 1672740 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40026426f0}
	I1009 23:40:34.913685 1672740 network_create.go:124] attempt to create docker network pause-078272 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1009 23:40:34.913745 1672740 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=pause-078272 pause-078272
	I1009 23:40:34.985040 1672740 network_create.go:108] docker network pause-078272 192.168.67.0/24 created
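The network.go lines above show the subnet scan: candidates advance the third octet in steps of 9 (49, 58, 67, ...), each candidate is skipped if an existing bridge already owns it, and the first free one supplies both the gateway (.1) and the node's static IP (.2, per the kic.go line below). A rough sketch of that scan under those assumptions — not minikube's actual implementation:

	package main

	import (
		"fmt"
		"net"
	)

	// taken would normally come from inspecting the host's existing docker
	// bridges; hard-coded here to mirror the two subnets skipped in the log.
	var taken = map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
	}

	func main() {
		// Candidates step the third octet by 9, matching the 49 -> 58 -> 67
		// progression visible in the network.go log lines above.
		for octet := 49; octet <= 254; octet += 9 {
			cidr := fmt.Sprintf("192.168.%d.0/24", octet)
			if taken[cidr] {
				fmt.Println("skipping subnet", cidr, "that is taken")
				continue
			}
			_, subnet, _ := net.ParseCIDR(cidr)
			gateway := make(net.IP, 4)
			copy(gateway, subnet.IP.To4())
			gateway[3] = 1 // .1 is the bridge gateway; .2 becomes the node's static IP
			fmt.Println("using free private subnet", cidr, "gateway", gateway)
			return
		}
	}

Run against the state above, this skips 192.168.49.0/24 and 192.168.58.0/24 and lands on 192.168.67.0/24, which is exactly the subnet handed to `docker network create` in the log.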
	I1009 23:40:34.985061 1672740 kic.go:118] calculated static IP "192.168.67.2" for the "pause-078272" container
	I1009 23:40:34.985148 1672740 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1009 23:40:35.013892 1672740 cli_runner.go:164] Run: docker volume create pause-078272 --label name.minikube.sigs.k8s.io=pause-078272 --label created_by.minikube.sigs.k8s.io=true
	I1009 23:40:35.034525 1672740 oci.go:103] Successfully created a docker volume pause-078272
	I1009 23:40:35.034617 1672740 cli_runner.go:164] Run: docker run --rm --name pause-078272-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=pause-078272 --entrypoint /usr/bin/test -v pause-078272:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -d /var/lib
	I1009 23:40:35.687294 1672740 oci.go:107] Successfully prepared a docker volume pause-078272
	I1009 23:40:35.687325 1672740 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1009 23:40:35.687342 1672740 kic.go:191] Starting extracting preloaded images to volume ...
	I1009 23:40:35.687450 1672740 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17375-1537865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v pause-078272:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -I lz4 -xf /preloaded.tar -C /extractDir
	I1009 23:40:40.135283 1672740 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17375-1537865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v pause-078272:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae -I lz4 -xf /preloaded.tar -C /extractDir: (4.447792929s)
	I1009 23:40:40.135314 1672740 kic.go:200] duration metric: took 4.447959 seconds to extract preloaded images to volume
	W1009 23:40:40.135467 1672740 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1009 23:40:40.135583 1672740 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1009 23:40:40.207394 1672740 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname pause-078272 --name pause-078272 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=pause-078272 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=pause-078272 --network pause-078272 --ip 192.168.67.2 --volume pause-078272:/var --security-opt apparmor=unconfined --memory=2048mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae
	I1009 23:40:40.580560 1672740 cli_runner.go:164] Run: docker container inspect pause-078272 --format={{.State.Running}}
	I1009 23:40:40.608640 1672740 cli_runner.go:164] Run: docker container inspect pause-078272 --format={{.State.Status}}
	I1009 23:40:40.629576 1672740 cli_runner.go:164] Run: docker exec pause-078272 stat /var/lib/dpkg/alternatives/iptables
	I1009 23:40:40.721539 1672740 oci.go:144] the created container "pause-078272" has a running status.
	I1009 23:40:40.721577 1672740 kic.go:222] Creating ssh key for kic: /home/jenkins/minikube-integration/17375-1537865/.minikube/machines/pause-078272/id_rsa...
	I1009 23:40:40.902867 1672740 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17375-1537865/.minikube/machines/pause-078272/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1009 23:40:40.932747 1672740 cli_runner.go:164] Run: docker container inspect pause-078272 --format={{.State.Status}}
	I1009 23:40:40.952766 1672740 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1009 23:40:40.952778 1672740 kic_runner.go:114] Args: [docker exec --privileged pause-078272 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1009 23:40:41.031146 1672740 cli_runner.go:164] Run: docker container inspect pause-078272 --format={{.State.Status}}
	I1009 23:40:41.050729 1672740 machine.go:88] provisioning docker machine ...
	I1009 23:40:41.050751 1672740 ubuntu.go:169] provisioning hostname "pause-078272"
	I1009 23:40:41.050827 1672740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-078272
	I1009 23:40:41.074111 1672740 main.go:141] libmachine: Using SSH client type: native
	I1009 23:40:41.075723 1672740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 34542 <nil> <nil>}
	I1009 23:40:41.075738 1672740 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-078272 && echo "pause-078272" | sudo tee /etc/hostname
	I1009 23:40:41.077521 1672740 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1009 23:40:44.232863 1672740 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-078272
	
	I1009 23:40:44.232946 1672740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-078272
	I1009 23:40:44.251262 1672740 main.go:141] libmachine: Using SSH client type: native
	I1009 23:40:44.251674 1672740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 34542 <nil> <nil>}
	I1009 23:40:44.251689 1672740 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-078272' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-078272/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-078272' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 23:40:44.384325 1672740 main.go:141] libmachine: SSH cmd err, output: <nil>: 
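Provisioning then runs over SSH against the published 127.0.0.1:34542 port; the first dial above fails with `ssh: handshake failed: EOF` while sshd inside the container is still coming up, and the retry succeeds a few seconds later. A hypothetical equivalent of the hostname step using golang.org/x/crypto/ssh, with the key path, port, and command taken from the log and error handling reduced to panics:

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile("/home/jenkins/minikube-integration/17375-1537865/.minikube/machines/pause-078272/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test container; never do this in production
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:34542", cfg)
		if err != nil {
			panic(err) // a retry loop here would absorb the early "handshake failed: EOF"
		}
		defer client.Close()

		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()
		out, err := sess.CombinedOutput(`sudo hostname pause-078272 && echo "pause-078272" | sudo tee /etc/hostname`)
		fmt.Printf("output: %s err: %v\n", out, err)
	}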
	I1009 23:40:44.384344 1672740 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17375-1537865/.minikube CaCertPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17375-1537865/.minikube}
	I1009 23:40:44.384363 1672740 ubuntu.go:177] setting up certificates
	I1009 23:40:44.384370 1672740 provision.go:83] configureAuth start
	I1009 23:40:44.384428 1672740 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-078272
	I1009 23:40:44.402736 1672740 provision.go:138] copyHostCerts
	I1009 23:40:44.402797 1672740 exec_runner.go:144] found /home/jenkins/minikube-integration/17375-1537865/.minikube/ca.pem, removing ...
	I1009 23:40:44.402804 1672740 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17375-1537865/.minikube/ca.pem
	I1009 23:40:44.402892 1672740 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17375-1537865/.minikube/ca.pem (1078 bytes)
	I1009 23:40:44.402991 1672740 exec_runner.go:144] found /home/jenkins/minikube-integration/17375-1537865/.minikube/cert.pem, removing ...
	I1009 23:40:44.402995 1672740 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17375-1537865/.minikube/cert.pem
	I1009 23:40:44.403021 1672740 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17375-1537865/.minikube/cert.pem (1123 bytes)
	I1009 23:40:44.403080 1672740 exec_runner.go:144] found /home/jenkins/minikube-integration/17375-1537865/.minikube/key.pem, removing ...
	I1009 23:40:44.403083 1672740 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17375-1537865/.minikube/key.pem
	I1009 23:40:44.403106 1672740 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17375-1537865/.minikube/key.pem (1679 bytes)
	I1009 23:40:44.403301 1672740 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17375-1537865/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17375-1537865/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17375-1537865/.minikube/certs/ca-key.pem org=jenkins.pause-078272 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube pause-078272]
	I1009 23:40:44.688408 1672740 provision.go:172] copyRemoteCerts
	I1009 23:40:44.688481 1672740 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 23:40:44.688547 1672740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-078272
	I1009 23:40:44.706478 1672740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34542 SSHKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/machines/pause-078272/id_rsa Username:docker}
	I1009 23:40:44.802235 1672740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-1537865/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1009 23:40:44.832377 1672740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 23:40:44.860637 1672740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-1537865/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1009 23:40:44.889053 1672740 provision.go:86] duration metric: configureAuth took 504.668867ms
	I1009 23:40:44.889070 1672740 ubuntu.go:193] setting minikube options for container-runtime
	I1009 23:40:44.889267 1672740 config.go:182] Loaded profile config "pause-078272": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1009 23:40:44.889367 1672740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-078272
	I1009 23:40:44.908475 1672740 main.go:141] libmachine: Using SSH client type: native
	I1009 23:40:44.908898 1672740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3aef90] 0x3b1700 <nil>  [] 0s} 127.0.0.1 34542 <nil> <nil>}
	I1009 23:40:44.908911 1672740 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 23:40:45.363842 1672740 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 23:40:45.363874 1672740 machine.go:91] provisioned docker machine in 4.313117558s
	I1009 23:40:45.363884 1672740 client.go:171] LocalClient.Create took 10.504958815s
	I1009 23:40:45.363897 1672740 start.go:167] duration metric: libmachine.API.Create for "pause-078272" took 10.505004353s
	I1009 23:40:45.363905 1672740 start.go:300] post-start starting for "pause-078272" (driver="docker")
	I1009 23:40:45.363914 1672740 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 23:40:45.363994 1672740 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 23:40:45.364040 1672740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-078272
	I1009 23:40:45.395320 1672740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34542 SSHKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/machines/pause-078272/id_rsa Username:docker}
	I1009 23:40:45.499915 1672740 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 23:40:45.504549 1672740 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 23:40:45.504587 1672740 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1009 23:40:45.504597 1672740 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1009 23:40:45.504610 1672740 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1009 23:40:45.504624 1672740 filesync.go:126] Scanning /home/jenkins/minikube-integration/17375-1537865/.minikube/addons for local assets ...
	I1009 23:40:45.504706 1672740 filesync.go:126] Scanning /home/jenkins/minikube-integration/17375-1537865/.minikube/files for local assets ...
	I1009 23:40:45.504803 1672740 filesync.go:149] local asset: /home/jenkins/minikube-integration/17375-1537865/.minikube/files/etc/ssl/certs/15432152.pem -> 15432152.pem in /etc/ssl/certs
	I1009 23:40:45.504920 1672740 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1009 23:40:45.517023 1672740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-1537865/.minikube/files/etc/ssl/certs/15432152.pem --> /etc/ssl/certs/15432152.pem (1708 bytes)
	I1009 23:40:45.549207 1672740 start.go:303] post-start completed in 185.287994ms
	I1009 23:40:45.549611 1672740 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-078272
	I1009 23:40:45.568837 1672740 profile.go:148] Saving config to /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/pause-078272/config.json ...
	I1009 23:40:45.569131 1672740 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 23:40:45.569170 1672740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-078272
	I1009 23:40:45.588224 1672740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34542 SSHKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/machines/pause-078272/id_rsa Username:docker}
	I1009 23:40:45.681957 1672740 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 23:40:45.687988 1672740 start.go:128] duration metric: createHost completed in 10.831939843s
	I1009 23:40:45.688002 1672740 start.go:83] releasing machines lock for "pause-078272", held for 10.832068573s
	I1009 23:40:45.688072 1672740 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-078272
	I1009 23:40:45.705476 1672740 ssh_runner.go:195] Run: cat /version.json
	I1009 23:40:45.705513 1672740 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 23:40:45.705519 1672740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-078272
	I1009 23:40:45.705581 1672740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-078272
	I1009 23:40:45.731110 1672740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34542 SSHKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/machines/pause-078272/id_rsa Username:docker}
	I1009 23:40:45.733870 1672740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34542 SSHKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/machines/pause-078272/id_rsa Username:docker}
	I1009 23:40:45.823415 1672740 ssh_runner.go:195] Run: systemctl --version
	I1009 23:40:45.977757 1672740 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 23:40:46.140022 1672740 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1009 23:40:46.145580 1672740 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 23:40:46.171696 1672740 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1009 23:40:46.171762 1672740 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 23:40:46.213288 1672740 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1009 23:40:46.213300 1672740 start.go:472] detecting cgroup driver to use...
	I1009 23:40:46.213330 1672740 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1009 23:40:46.213374 1672740 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 23:40:46.231099 1672740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 23:40:46.244391 1672740 docker.go:198] disabling cri-docker service (if available) ...
	I1009 23:40:46.244445 1672740 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 23:40:46.260388 1672740 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 23:40:46.277401 1672740 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 23:40:46.378354 1672740 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 23:40:46.488159 1672740 docker.go:214] disabling docker service ...
	I1009 23:40:46.488213 1672740 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 23:40:46.511171 1672740 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 23:40:46.525018 1672740 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 23:40:46.640879 1672740 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 23:40:46.746470 1672740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 23:40:46.759854 1672740 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 23:40:46.779952 1672740 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1009 23:40:46.780007 1672740 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 23:40:46.791676 1672740 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 23:40:46.791754 1672740 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 23:40:46.804308 1672740 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 23:40:46.816012 1672740 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 23:40:46.827830 1672740 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 23:40:46.839388 1672740 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 23:40:46.850242 1672740 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 23:40:46.860523 1672740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 23:40:46.958068 1672740 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 23:40:47.091441 1672740 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 23:40:47.091514 1672740 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 23:40:47.096236 1672740 start.go:540] Will wait 60s for crictl version
	I1009 23:40:47.096288 1672740 ssh_runner.go:195] Run: which crictl
	I1009 23:40:47.100658 1672740 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 23:40:47.148579 1672740 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
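The two "Will wait 60s" steps above are simple polls: stat the socket path until it exists, then shell out to crictl until it answers. A minimal sketch of the socket wait, assuming only the path and timeout from the log:

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls until path exists or the deadline passes, roughly
	// mirroring minikube's "Will wait 60s for socket path" step above.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
	}

	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("crio socket is up")
	}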
	I1009 23:40:47.148656 1672740 ssh_runner.go:195] Run: crio --version
	I1009 23:40:47.202228 1672740 ssh_runner.go:195] Run: crio --version
	I1009 23:40:47.248507 1672740 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.6 ...
	I1009 23:40:47.251072 1672740 cli_runner.go:164] Run: docker network inspect pause-078272 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 23:40:47.268586 1672740 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I1009 23:40:47.273196 1672740 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 23:40:47.286704 1672740 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1009 23:40:47.286772 1672740 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 23:40:47.351042 1672740 crio.go:496] all images are preloaded for cri-o runtime.
	I1009 23:40:47.351063 1672740 crio.go:415] Images already preloaded, skipping extraction
	I1009 23:40:47.351144 1672740 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 23:40:47.390520 1672740 crio.go:496] all images are preloaded for cri-o runtime.
	I1009 23:40:47.390531 1672740 cache_images.go:84] Images are preloaded, skipping loading
	I1009 23:40:47.390607 1672740 ssh_runner.go:195] Run: crio config
	I1009 23:40:47.455877 1672740 cni.go:84] Creating CNI manager for ""
	I1009 23:40:47.455888 1672740 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 23:40:47.455908 1672740 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1009 23:40:47.455926 1672740 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-078272 NodeName:pause-078272 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 23:40:47.456058 1672740 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-078272"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
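The kubeadm config above is rendered from the kubeadm.go:176 options struct via Go templates. A reduced, hypothetical sketch of that rendering, abridged to the InitConfiguration stanza and with an invented `opts` struct standing in for minikube's real parameter type:

	package main

	import (
		"os"
		"text/template"
	)

	// opts carries the handful of values from the "kubeadm options" log line
	// that the abridged template below actually uses.
	type opts struct {
		AdvertiseAddress string
		APIServerPort    int
		NodeName         string
		CRISocket        string
	}

	var initCfg = template.Must(template.New("init").Parse(`apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.APIServerPort}}
	nodeRegistration:
	  criSocket: unix://{{.CRISocket}}
	  name: "{{.NodeName}}"
	  taints: []
	`))

	func main() {
		_ = initCfg.Execute(os.Stdout, opts{
			AdvertiseAddress: "192.168.67.2",
			APIServerPort:    8443,
			NodeName:         "pause-078272",
			CRISocket:        "/var/run/crio/crio.sock",
		})
	}

Executing this with the values from the log reproduces the first stanza of the rendered config shown above.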
	
	I1009 23:40:47.456127 1672740 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=pause-078272 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:pause-078272 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1009 23:40:47.456193 1672740 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1009 23:40:47.467373 1672740 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 23:40:47.467451 1672740 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 23:40:47.477880 1672740 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (422 bytes)
	I1009 23:40:47.498919 1672740 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 23:40:47.520144 1672740 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2093 bytes)
	I1009 23:40:47.541582 1672740 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I1009 23:40:47.546079 1672740 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 23:40:47.559848 1672740 certs.go:56] Setting up /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/pause-078272 for IP: 192.168.67.2
	I1009 23:40:47.559869 1672740 certs.go:190] acquiring lock for shared ca certs: {Name:mk430c21a56d31b4f15423923c65864a3e3a3c9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 23:40:47.560031 1672740 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17375-1537865/.minikube/ca.key
	I1009 23:40:47.560072 1672740 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17375-1537865/.minikube/proxy-client-ca.key
	I1009 23:40:47.560116 1672740 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/pause-078272/client.key
	I1009 23:40:47.560125 1672740 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/pause-078272/client.crt with IP's: []
	I1009 23:40:47.999353 1672740 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/pause-078272/client.crt ...
	I1009 23:40:47.999367 1672740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/pause-078272/client.crt: {Name:mk261e0fc11e94baffd01fe3b17295ebc4d200c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 23:40:47.999573 1672740 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/pause-078272/client.key ...
	I1009 23:40:47.999587 1672740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/pause-078272/client.key: {Name:mkd53fa5174b8b28ff26456f572639f64cf38cd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 23:40:47.999679 1672740 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/pause-078272/apiserver.key.c7fa3a9e
	I1009 23:40:47.999690 1672740 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/pause-078272/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1009 23:40:48.381639 1672740 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/pause-078272/apiserver.crt.c7fa3a9e ...
	I1009 23:40:48.381657 1672740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/pause-078272/apiserver.crt.c7fa3a9e: {Name:mk32855bdf5fda7bc47f3cb37fe55498881620bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 23:40:48.382516 1672740 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/pause-078272/apiserver.key.c7fa3a9e ...
	I1009 23:40:48.382530 1672740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/pause-078272/apiserver.key.c7fa3a9e: {Name:mk11b010de4a13f6f1f1f59fe0a7152e8a154590 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 23:40:48.382642 1672740 certs.go:337] copying /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/pause-078272/apiserver.crt.c7fa3a9e -> /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/pause-078272/apiserver.crt
	I1009 23:40:48.382724 1672740 certs.go:341] copying /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/pause-078272/apiserver.key.c7fa3a9e -> /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/pause-078272/apiserver.key
	I1009 23:40:48.382774 1672740 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/pause-078272/proxy-client.key
	I1009 23:40:48.382784 1672740 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/pause-078272/proxy-client.crt with IP's: []
	I1009 23:40:48.605442 1672740 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/pause-078272/proxy-client.crt ...
	I1009 23:40:48.605458 1672740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/pause-078272/proxy-client.crt: {Name:mk6807416acdaadcbda11cb2f2336d6212289d50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 23:40:48.605689 1672740 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/pause-078272/proxy-client.key ...
	I1009 23:40:48.605696 1672740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/pause-078272/proxy-client.key: {Name:mk81a6c527b7b25716d21666ff14a533fd03ee5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
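The certs.go/crypto.go lines above generate the apiserver keypair with the SAN list [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1] (node IP, service ClusterIP, loopback) and the 26280h expiry from the cluster config. A self-signed sketch with crypto/x509 under those inputs; minikube actually signs with its minikubeCA rather than self-signing:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config above
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{ // the SAN list from the crypto.go:68 log line
				net.ParseIP("192.168.67.2"), net.ParseIP("10.96.0.1"),
				net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			},
		}
		// Self-signed for brevity: the template is its own parent.
		der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}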
	I1009 23:40:48.606683 1672740 certs.go:437] found cert: /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/home/jenkins/minikube-integration/17375-1537865/.minikube/certs/1543215.pem (1338 bytes)
	W1009 23:40:48.606724 1672740 certs.go:433] ignoring /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/home/jenkins/minikube-integration/17375-1537865/.minikube/certs/1543215_empty.pem, impossibly tiny 0 bytes
	I1009 23:40:48.606733 1672740 certs.go:437] found cert: /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/home/jenkins/minikube-integration/17375-1537865/.minikube/certs/ca-key.pem (1675 bytes)
	I1009 23:40:48.606765 1672740 certs.go:437] found cert: /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/home/jenkins/minikube-integration/17375-1537865/.minikube/certs/ca.pem (1078 bytes)
	I1009 23:40:48.606790 1672740 certs.go:437] found cert: /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/home/jenkins/minikube-integration/17375-1537865/.minikube/certs/cert.pem (1123 bytes)
	I1009 23:40:48.606813 1672740 certs.go:437] found cert: /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/home/jenkins/minikube-integration/17375-1537865/.minikube/certs/key.pem (1679 bytes)
	I1009 23:40:48.606862 1672740 certs.go:437] found cert: /home/jenkins/minikube-integration/17375-1537865/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17375-1537865/.minikube/files/etc/ssl/certs/15432152.pem (1708 bytes)
	I1009 23:40:48.607591 1672740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/pause-078272/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1009 23:40:48.636733 1672740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/pause-078272/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1009 23:40:48.665784 1672740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/pause-078272/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 23:40:48.694220 1672740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/pause-078272/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 23:40:48.722723 1672740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-1537865/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 23:40:48.751962 1672740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-1537865/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1009 23:40:48.781592 1672740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-1537865/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 23:40:48.810640 1672740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-1537865/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 23:40:48.839248 1672740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-1537865/.minikube/certs/1543215.pem --> /usr/share/ca-certificates/1543215.pem (1338 bytes)
	I1009 23:40:48.867666 1672740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-1537865/.minikube/files/etc/ssl/certs/15432152.pem --> /usr/share/ca-certificates/15432152.pem (1708 bytes)
	I1009 23:40:48.896030 1672740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17375-1537865/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 23:40:48.924199 1672740 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 23:40:48.945258 1672740 ssh_runner.go:195] Run: openssl version
	I1009 23:40:48.954547 1672740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15432152.pem && ln -fs /usr/share/ca-certificates/15432152.pem /etc/ssl/certs/15432152.pem"
	I1009 23:40:48.968835 1672740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15432152.pem
	I1009 23:40:48.973665 1672740 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  9 23:03 /usr/share/ca-certificates/15432152.pem
	I1009 23:40:48.973729 1672740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15432152.pem
	I1009 23:40:48.982706 1672740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15432152.pem /etc/ssl/certs/3ec20f2e.0"
	I1009 23:40:48.994460 1672740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 23:40:49.007433 1672740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 23:40:49.012494 1672740 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  9 22:55 /usr/share/ca-certificates/minikubeCA.pem
	I1009 23:40:49.012562 1672740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 23:40:49.021370 1672740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 23:40:49.032936 1672740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1543215.pem && ln -fs /usr/share/ca-certificates/1543215.pem /etc/ssl/certs/1543215.pem"
	I1009 23:40:49.044577 1672740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1543215.pem
	I1009 23:40:49.050497 1672740 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  9 23:03 /usr/share/ca-certificates/1543215.pem
	I1009 23:40:49.050569 1672740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1543215.pem
	I1009 23:40:49.060459 1672740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1543215.pem /etc/ssl/certs/51391683.0"
	I1009 23:40:49.072837 1672740 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1009 23:40:49.077321 1672740 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1009 23:40:49.077364 1672740 kubeadm.go:404] StartCluster: {Name:pause-078272 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:pause-078272 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1009 23:40:49.077442 1672740 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 23:40:49.077505 1672740 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 23:40:49.120716 1672740 cri.go:89] found id: ""
	I1009 23:40:49.120778 1672740 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 23:40:49.133987 1672740 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 23:40:49.146582 1672740 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1009 23:40:49.146638 1672740 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 23:40:49.157549 1672740 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 23:40:49.157580 1672740 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 23:40:49.217219 1672740 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I1009 23:40:49.217559 1672740 kubeadm.go:322] [preflight] Running pre-flight checks
	I1009 23:40:49.262003 1672740 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1009 23:40:49.262061 1672740 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1047-aws
	I1009 23:40:49.262093 1672740 kubeadm.go:322] OS: Linux
	I1009 23:40:49.262135 1672740 kubeadm.go:322] CGROUPS_CPU: enabled
	I1009 23:40:49.262180 1672740 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1009 23:40:49.262223 1672740 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1009 23:40:49.262267 1672740 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1009 23:40:49.262311 1672740 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1009 23:40:49.262356 1672740 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1009 23:40:49.262398 1672740 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1009 23:40:49.262442 1672740 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1009 23:40:49.262487 1672740 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1009 23:40:49.348942 1672740 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 23:40:49.349037 1672740 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 23:40:49.349122 1672740 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1009 23:40:49.610381 1672740 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 23:40:49.613805 1672740 out.go:204]   - Generating certificates and keys ...
	I1009 23:40:49.613929 1672740 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1009 23:40:49.613994 1672740 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1009 23:40:50.105163 1672740 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1009 23:40:50.689476 1672740 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1009 23:40:51.202387 1672740 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1009 23:40:51.935576 1672740 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1009 23:40:52.332751 1672740 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1009 23:40:52.332866 1672740 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost pause-078272] and IPs [192.168.67.2 127.0.0.1 ::1]
	I1009 23:40:53.555155 1672740 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1009 23:40:53.555774 1672740 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost pause-078272] and IPs [192.168.67.2 127.0.0.1 ::1]
	I1009 23:40:53.755823 1672740 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1009 23:40:54.149572 1672740 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1009 23:40:54.385215 1672740 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1009 23:40:54.385474 1672740 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 23:40:54.861561 1672740 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 23:40:55.176087 1672740 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 23:40:55.710324 1672740 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 23:40:56.399588 1672740 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 23:40:56.400210 1672740 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 23:40:56.402900 1672740 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 23:40:56.405537 1672740 out.go:204]   - Booting up control plane ...
	I1009 23:40:56.405646 1672740 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 23:40:56.406319 1672740 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 23:40:56.407528 1672740 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 23:40:56.419223 1672740 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 23:40:56.419315 1672740 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 23:40:56.419351 1672740 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1009 23:40:56.523308 1672740 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1009 23:41:03.526242 1672740 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.003364 seconds
	I1009 23:41:03.526358 1672740 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1009 23:41:03.542207 1672740 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1009 23:41:04.071523 1672740 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1009 23:41:04.071703 1672740 kubeadm.go:322] [mark-control-plane] Marking the node pause-078272 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1009 23:41:04.596161 1672740 kubeadm.go:322] [bootstrap-token] Using token: b0zq4t.uum2h2l5ie13s8bz
	I1009 23:41:04.598741 1672740 out.go:204]   - Configuring RBAC rules ...
	I1009 23:41:04.598862 1672740 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1009 23:41:04.604653 1672740 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1009 23:41:04.615347 1672740 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1009 23:41:04.621093 1672740 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1009 23:41:04.625265 1672740 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1009 23:41:04.629290 1672740 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1009 23:41:04.645554 1672740 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1009 23:41:04.911086 1672740 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1009 23:41:05.040917 1672740 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1009 23:41:05.042230 1672740 kubeadm.go:322] 
	I1009 23:41:05.042294 1672740 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1009 23:41:05.042300 1672740 kubeadm.go:322] 
	I1009 23:41:05.042370 1672740 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1009 23:41:05.042374 1672740 kubeadm.go:322] 
	I1009 23:41:05.042398 1672740 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1009 23:41:05.042452 1672740 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1009 23:41:05.042498 1672740 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1009 23:41:05.042502 1672740 kubeadm.go:322] 
	I1009 23:41:05.042552 1672740 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1009 23:41:05.042556 1672740 kubeadm.go:322] 
	I1009 23:41:05.042600 1672740 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1009 23:41:05.042604 1672740 kubeadm.go:322] 
	I1009 23:41:05.042651 1672740 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1009 23:41:05.042721 1672740 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1009 23:41:05.042789 1672740 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1009 23:41:05.042794 1672740 kubeadm.go:322] 
	I1009 23:41:05.042871 1672740 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1009 23:41:05.042942 1672740 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1009 23:41:05.042947 1672740 kubeadm.go:322] 
	I1009 23:41:05.043024 1672740 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token b0zq4t.uum2h2l5ie13s8bz \
	I1009 23:41:05.043159 1672740 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:e2aebf53348f507bad0adab8a765b229b70810954e22f1e7a919941009267e3f \
	I1009 23:41:05.043179 1672740 kubeadm.go:322] 	--control-plane 
	I1009 23:41:05.043183 1672740 kubeadm.go:322] 
	I1009 23:41:05.043282 1672740 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1009 23:41:05.043298 1672740 kubeadm.go:322] 
	I1009 23:41:05.043385 1672740 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token b0zq4t.uum2h2l5ie13s8bz \
	I1009 23:41:05.043490 1672740 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:e2aebf53348f507bad0adab8a765b229b70810954e22f1e7a919941009267e3f 
	I1009 23:41:05.047752 1672740 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1047-aws\n", err: exit status 1
	I1009 23:41:05.047857 1672740 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 23:41:05.047872 1672740 cni.go:84] Creating CNI manager for ""
	I1009 23:41:05.047879 1672740 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 23:41:05.050933 1672740 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1009 23:41:05.052940 1672740 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1009 23:41:05.061410 1672740 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.2/kubectl ...
	I1009 23:41:05.061422 1672740 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1009 23:41:05.118259 1672740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1009 23:41:06.035608 1672740 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1009 23:41:06.035728 1672740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=1a5b93fd6ef42f6c2d46c92c79fcd158f262dc90 minikube.k8s.io/name=pause-078272 minikube.k8s.io/updated_at=2023_10_09T23_41_06_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:41:06.035731 1672740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:41:06.211465 1672740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:41:06.211523 1672740 ops.go:34] apiserver oom_adj: -16
	I1009 23:41:06.324743 1672740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:41:06.921563 1672740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:41:07.421044 1672740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:41:07.921010 1672740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:41:08.421021 1672740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:41:08.921658 1672740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:41:09.421009 1672740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:41:09.921226 1672740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:41:10.421446 1672740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:41:10.921230 1672740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:41:11.421146 1672740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:41:11.921294 1672740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:41:12.421721 1672740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:41:12.921000 1672740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:41:13.421671 1672740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:41:13.921672 1672740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:41:14.421793 1672740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:41:14.921005 1672740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:41:15.421528 1672740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:41:15.921002 1672740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:41:16.421572 1672740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:41:16.921175 1672740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:41:17.421650 1672740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 23:41:17.558473 1672740 kubeadm.go:1081] duration metric: took 11.522807s to wait for elevateKubeSystemPrivileges.
	I1009 23:41:17.558491 1672740 kubeadm.go:406] StartCluster complete in 28.481129783s
	I1009 23:41:17.558505 1672740 settings.go:142] acquiring lock: {Name:mkeeac28244e9503bae3d91ba3a5c4a3392545f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 23:41:17.558566 1672740 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17375-1537865/kubeconfig
	I1009 23:41:17.559283 1672740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17375-1537865/kubeconfig: {Name:mk913f33f2148d9a5b250c16fc9df0a8782f9275 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 23:41:17.560877 1672740 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1009 23:41:17.561163 1672740 config.go:182] Loaded profile config "pause-078272": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1009 23:41:17.612755 1672740 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-078272" context rescaled to 1 replicas
	I1009 23:41:17.612791 1672740 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 23:41:17.614826 1672740 out.go:177] * Verifying Kubernetes components...
	I1009 23:41:17.617067 1672740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 23:41:17.710743 1672740 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1009 23:41:17.711672 1672740 node_ready.go:35] waiting up to 6m0s for node "pause-078272" to be "Ready" ...
	I1009 23:41:18.064514 1672740 start.go:926] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS's ConfigMap
	I1009 23:41:19.242398 1672740 node_ready.go:49] node "pause-078272" has status "Ready":"True"
	I1009 23:41:19.242409 1672740 node_ready.go:38] duration metric: took 1.530722494s waiting for node "pause-078272" to be "Ready" ...
	I1009 23:41:19.242418 1672740 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 23:41:19.274063 1672740 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-dvxbw" in "kube-system" namespace to be "Ready" ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p stopped-upgrade-991232"

                                                
                                                
-- /stdout --
version_upgrade_test.go:221: `minikube logs` after upgrade to HEAD from v1.17.0 failed: exit status 85
--- FAIL: TestStoppedBinaryUpgrade/MinikubeLogs (0.16s)

                                                
                                    

Test pass (272/308)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 14.1
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.1
10 TestDownloadOnly/v1.28.2/json-events 11.78
11 TestDownloadOnly/v1.28.2/preload-exists 0
15 TestDownloadOnly/v1.28.2/LogsDuration 0.1
16 TestDownloadOnly/DeleteAll 0.24
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.17
19 TestBinaryMirror 0.63
23 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.11
24 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.1
25 TestAddons/Setup 164.94
27 TestAddons/parallel/Registry 19.54
29 TestAddons/parallel/InspektorGadget 11.06
30 TestAddons/parallel/MetricsServer 5.96
33 TestAddons/parallel/CSI 48.71
34 TestAddons/parallel/Headlamp 12.57
35 TestAddons/parallel/CloudSpanner 5.63
36 TestAddons/parallel/LocalPath 52.1
37 TestAddons/parallel/NvidiaDevicePlugin 5.64
40 TestAddons/serial/GCPAuth/Namespaces 0.49
41 TestAddons/StoppedEnableDisable 12.48
42 TestCertOptions 37.22
43 TestCertExpiration 278.66
45 TestForceSystemdFlag 42.13
46 TestForceSystemdEnv 44.32
52 TestErrorSpam/setup 33.15
53 TestErrorSpam/start 0.94
54 TestErrorSpam/status 1.16
55 TestErrorSpam/pause 1.92
56 TestErrorSpam/unpause 2.09
57 TestErrorSpam/stop 1.51
60 TestFunctional/serial/CopySyncFile 0
61 TestFunctional/serial/StartWithProxy 76.4
62 TestFunctional/serial/AuditLog 0
63 TestFunctional/serial/SoftStart 42.32
64 TestFunctional/serial/KubeContext 0.06
65 TestFunctional/serial/KubectlGetPods 0.11
68 TestFunctional/serial/CacheCmd/cache/add_remote 4.15
69 TestFunctional/serial/CacheCmd/cache/add_local 1.22
70 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.08
71 TestFunctional/serial/CacheCmd/cache/list 0.08
72 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.35
73 TestFunctional/serial/CacheCmd/cache/cache_reload 2.32
74 TestFunctional/serial/CacheCmd/cache/delete 0.15
75 TestFunctional/serial/MinikubeKubectlCmd 0.16
76 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.16
77 TestFunctional/serial/ExtraConfig 32.67
78 TestFunctional/serial/ComponentHealth 0.11
79 TestFunctional/serial/LogsCmd 1.9
80 TestFunctional/serial/LogsFileCmd 1.98
81 TestFunctional/serial/InvalidService 4.27
83 TestFunctional/parallel/ConfigCmd 0.54
84 TestFunctional/parallel/DashboardCmd 10.37
85 TestFunctional/parallel/DryRun 0.77
86 TestFunctional/parallel/InternationalLanguage 0.3
87 TestFunctional/parallel/StatusCmd 1.67
91 TestFunctional/parallel/ServiceCmdConnect 10.83
92 TestFunctional/parallel/AddonsCmd 0.22
93 TestFunctional/parallel/PersistentVolumeClaim 28.63
95 TestFunctional/parallel/SSHCmd 0.78
96 TestFunctional/parallel/CpCmd 1.81
98 TestFunctional/parallel/FileSync 0.41
99 TestFunctional/parallel/CertSync 2.51
103 TestFunctional/parallel/NodeLabels 0.12
105 TestFunctional/parallel/NonActiveRuntimeDisabled 0.97
107 TestFunctional/parallel/License 0.45
109 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.83
110 TestFunctional/parallel/Version/short 0.12
111 TestFunctional/parallel/Version/components 1.07
112 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
114 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.6
115 TestFunctional/parallel/ImageCommands/ImageListShort 0.4
116 TestFunctional/parallel/ImageCommands/ImageListTable 0.36
117 TestFunctional/parallel/ImageCommands/ImageListJson 0.31
118 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
119 TestFunctional/parallel/ImageCommands/ImageBuild 5.73
120 TestFunctional/parallel/ImageCommands/Setup 2.64
121 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.19
122 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.97
123 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.94
124 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.12
125 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
129 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
130 TestFunctional/parallel/UpdateContextCmd/no_changes 0.29
131 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.32
132 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.3
133 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.07
134 TestFunctional/parallel/MountCmd/any-port 9.1
135 TestFunctional/parallel/ImageCommands/ImageRemove 0.74
136 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.9
137 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.05
138 TestFunctional/parallel/MountCmd/specific-port 2.42
139 TestFunctional/parallel/MountCmd/VerifyCleanup 3.58
140 TestFunctional/parallel/ServiceCmd/DeployApp 7.26
141 TestFunctional/parallel/ProfileCmd/profile_not_create 0.48
142 TestFunctional/parallel/ProfileCmd/profile_list 0.59
143 TestFunctional/parallel/ServiceCmd/List 0.72
144 TestFunctional/parallel/ProfileCmd/profile_json_output 0.53
145 TestFunctional/parallel/ServiceCmd/JSONOutput 0.68
146 TestFunctional/parallel/ServiceCmd/HTTPS 0.55
147 TestFunctional/parallel/ServiceCmd/Format 0.58
148 TestFunctional/parallel/ServiceCmd/URL 0.65
149 TestFunctional/delete_addon-resizer_images 0.09
150 TestFunctional/delete_my-image_image 0.02
151 TestFunctional/delete_minikube_cached_images 0.02
155 TestIngressAddonLegacy/StartLegacyK8sCluster 98.58
157 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 13.66
158 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.69
162 TestJSONOutput/start/Command 77.86
163 TestJSONOutput/start/Audit 0
165 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
166 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
168 TestJSONOutput/pause/Command 0.84
169 TestJSONOutput/pause/Audit 0
171 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
172 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
174 TestJSONOutput/unpause/Command 0.76
175 TestJSONOutput/unpause/Audit 0
177 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
180 TestJSONOutput/stop/Command 5.92
181 TestJSONOutput/stop/Audit 0
183 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
185 TestErrorJSONOutput 0.27
187 TestKicCustomNetwork/create_custom_network 44.55
188 TestKicCustomNetwork/use_default_bridge_network 41.32
189 TestKicExistingNetwork 37.39
190 TestKicCustomSubnet 35.83
191 TestKicStaticIP 33.89
192 TestMainNoArgs 0.09
193 TestMinikubeProfile 74.27
196 TestMountStart/serial/StartWithMountFirst 6.87
197 TestMountStart/serial/VerifyMountFirst 0.29
198 TestMountStart/serial/StartWithMountSecond 9.52
199 TestMountStart/serial/VerifyMountSecond 0.3
200 TestMountStart/serial/DeleteFirst 1.68
201 TestMountStart/serial/VerifyMountPostDelete 0.3
202 TestMountStart/serial/Stop 1.23
203 TestMountStart/serial/RestartStopped 7.82
204 TestMountStart/serial/VerifyMountPostStop 0.3
207 TestMultiNode/serial/FreshStart2Nodes 98.56
208 TestMultiNode/serial/DeployApp2Nodes 6.65
210 TestMultiNode/serial/AddNode 51.4
211 TestMultiNode/serial/ProfileList 0.36
212 TestMultiNode/serial/CopyFile 11.6
213 TestMultiNode/serial/StopNode 2.45
214 TestMultiNode/serial/StartAfterStop 12.79
215 TestMultiNode/serial/RestartKeepsNodes 125.03
216 TestMultiNode/serial/DeleteNode 5.17
217 TestMultiNode/serial/StopMultiNode 24.48
218 TestMultiNode/serial/RestartMultiNode 78.7
219 TestMultiNode/serial/ValidateNameConflict 37.41
224 TestPreload 181.52
226 TestScheduledStopUnix 110.92
229 TestInsufficientStorage 11.52
232 TestKubernetesUpgrade 381.26
235 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
236 TestNoKubernetes/serial/StartWithK8s 43.73
237 TestNoKubernetes/serial/StartWithStopK8s 18.18
238 TestNoKubernetes/serial/Start 9.81
239 TestNoKubernetes/serial/VerifyK8sNotRunning 0.51
240 TestNoKubernetes/serial/ProfileList 1.03
241 TestNoKubernetes/serial/Stop 1.3
242 TestNoKubernetes/serial/StartNoArgs 7.74
243 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.44
244 TestStoppedBinaryUpgrade/Setup 1.28
254 TestPause/serial/Start 52.02
256 TestPause/serial/SecondStartNoReconfiguration 42.61
260 TestPause/serial/Pause 1.1
261 TestPause/serial/VerifyStatus 0.45
262 TestPause/serial/Unpause 1.02
263 TestPause/serial/PauseAgain 1.32
264 TestPause/serial/DeletePaused 3.13
269 TestNetworkPlugins/group/false 5.78
270 TestPause/serial/VerifyDeletedResources 0.2
275 TestStartStop/group/old-k8s-version/serial/FirstStart 115.79
276 TestStartStop/group/old-k8s-version/serial/DeployApp 11.66
277 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.19
278 TestStartStop/group/old-k8s-version/serial/Stop 12.16
279 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.24
280 TestStartStop/group/old-k8s-version/serial/SecondStart 419.22
282 TestStartStop/group/no-preload/serial/FirstStart 65.23
283 TestStartStop/group/no-preload/serial/DeployApp 10.53
284 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.24
285 TestStartStop/group/no-preload/serial/Stop 12.11
286 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.23
287 TestStartStop/group/no-preload/serial/SecondStart 351.44
288 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.03
289 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
290 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.38
291 TestStartStop/group/old-k8s-version/serial/Pause 3.6
293 TestStartStop/group/embed-certs/serial/FirstStart 81.18
294 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 11.04
295 TestStartStop/group/embed-certs/serial/DeployApp 8.88
296 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.15
297 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.45
298 TestStartStop/group/no-preload/serial/Pause 4.19
299 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.06
300 TestStartStop/group/embed-certs/serial/Stop 12.25
302 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 77.51
303 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.46
304 TestStartStop/group/embed-certs/serial/SecondStart 355.88
305 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.5
306 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.26
307 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.13
308 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.24
309 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 628.52
310 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 14.03
311 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.14
312 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.42
313 TestStartStop/group/embed-certs/serial/Pause 3.65
315 TestStartStop/group/newest-cni/serial/FirstStart 51.91
316 TestStartStop/group/newest-cni/serial/DeployApp 0
317 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.45
318 TestStartStop/group/newest-cni/serial/Stop 1.44
319 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.41
320 TestStartStop/group/newest-cni/serial/SecondStart 32.5
321 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
322 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
323 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.38
324 TestStartStop/group/newest-cni/serial/Pause 3.34
325 TestNetworkPlugins/group/auto/Start 52.33
326 TestNetworkPlugins/group/auto/KubeletFlags 0.36
327 TestNetworkPlugins/group/auto/NetCatPod 11.36
328 TestNetworkPlugins/group/auto/DNS 0.21
329 TestNetworkPlugins/group/auto/Localhost 0.25
330 TestNetworkPlugins/group/auto/HairPin 0.2
331 TestNetworkPlugins/group/kindnet/Start 78.94
332 TestNetworkPlugins/group/kindnet/ControllerPod 5.04
333 TestNetworkPlugins/group/kindnet/KubeletFlags 0.36
334 TestNetworkPlugins/group/kindnet/NetCatPod 11.35
335 TestNetworkPlugins/group/kindnet/DNS 0.23
336 TestNetworkPlugins/group/kindnet/Localhost 0.39
337 TestNetworkPlugins/group/kindnet/HairPin 0.3
338 TestNetworkPlugins/group/calico/Start 76.9
339 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 5.03
340 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.21
341 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.53
342 TestStartStop/group/default-k8s-diff-port/serial/Pause 5.18
343 TestNetworkPlugins/group/custom-flannel/Start 76.6
344 TestNetworkPlugins/group/calico/ControllerPod 5.05
345 TestNetworkPlugins/group/calico/KubeletFlags 0.48
346 TestNetworkPlugins/group/calico/NetCatPod 14.52
347 TestNetworkPlugins/group/calico/DNS 0.3
348 TestNetworkPlugins/group/calico/Localhost 0.22
349 TestNetworkPlugins/group/calico/HairPin 0.21
350 TestNetworkPlugins/group/enable-default-cni/Start 88.43
351 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.53
352 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.45
353 TestNetworkPlugins/group/custom-flannel/DNS 0.26
354 TestNetworkPlugins/group/custom-flannel/Localhost 0.25
355 TestNetworkPlugins/group/custom-flannel/HairPin 0.27
356 TestNetworkPlugins/group/flannel/Start 71.94
357 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.38
358 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.44
359 TestNetworkPlugins/group/enable-default-cni/DNS 0.28
360 TestNetworkPlugins/group/enable-default-cni/Localhost 0.27
361 TestNetworkPlugins/group/enable-default-cni/HairPin 0.26
362 TestNetworkPlugins/group/flannel/ControllerPod 5.04
363 TestNetworkPlugins/group/bridge/Start 95.27
364 TestNetworkPlugins/group/flannel/KubeletFlags 0.43
365 TestNetworkPlugins/group/flannel/NetCatPod 13.43
366 TestNetworkPlugins/group/flannel/DNS 0.26
367 TestNetworkPlugins/group/flannel/Localhost 0.24
368 TestNetworkPlugins/group/flannel/HairPin 0.27
369 TestNetworkPlugins/group/bridge/KubeletFlags 0.33
370 TestNetworkPlugins/group/bridge/NetCatPod 10.34
371 TestNetworkPlugins/group/bridge/DNS 0.24
372 TestNetworkPlugins/group/bridge/Localhost 0.18
373 TestNetworkPlugins/group/bridge/HairPin 0.2
x
+
TestDownloadOnly/v1.16.0/json-events (14.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-132234 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-132234 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (14.098546954s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (14.10s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-132234
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-132234: exit status 85 (99.332281ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-132234 | jenkins | v1.31.2 | 09 Oct 23 22:54 UTC |          |
	|         | -p download-only-132234        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/09 22:54:40
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.21.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 22:54:40.419423 1543220 out.go:296] Setting OutFile to fd 1 ...
	I1009 22:54:40.419610 1543220 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1009 22:54:40.419620 1543220 out.go:309] Setting ErrFile to fd 2...
	I1009 22:54:40.419626 1543220 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1009 22:54:40.419859 1543220 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17375-1537865/.minikube/bin
	W1009 22:54:40.420006 1543220 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17375-1537865/.minikube/config/config.json: open /home/jenkins/minikube-integration/17375-1537865/.minikube/config/config.json: no such file or directory
	I1009 22:54:40.420378 1543220 out.go:303] Setting JSON to true
	I1009 22:54:40.421225 1543220 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":23824,"bootTime":1696868257,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1009 22:54:40.421294 1543220 start.go:138] virtualization:  
	I1009 22:54:40.425129 1543220 out.go:97] [download-only-132234] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1009 22:54:40.427604 1543220 out.go:169] MINIKUBE_LOCATION=17375
	W1009 22:54:40.425430 1543220 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17375-1537865/.minikube/cache/preloaded-tarball: no such file or directory
	I1009 22:54:40.425493 1543220 notify.go:220] Checking for updates...
	I1009 22:54:40.429572 1543220 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 22:54:40.431887 1543220 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17375-1537865/kubeconfig
	I1009 22:54:40.434403 1543220 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17375-1537865/.minikube
	I1009 22:54:40.436516 1543220 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1009 22:54:40.440948 1543220 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1009 22:54:40.441223 1543220 driver.go:378] Setting default libvirt URI to qemu:///system
	I1009 22:54:40.464801 1543220 docker.go:122] docker version: linux-24.0.6:Docker Engine - Community
	I1009 22:54:40.464884 1543220 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 22:54:40.552307 1543220 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:42 SystemTime:2023-10-09 22:54:40.542493253 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1009 22:54:40.552416 1543220 docker.go:295] overlay module found
	I1009 22:54:40.554842 1543220 out.go:97] Using the docker driver based on user configuration
	I1009 22:54:40.554870 1543220 start.go:298] selected driver: docker
	I1009 22:54:40.554877 1543220 start.go:902] validating driver "docker" against <nil>
	I1009 22:54:40.554985 1543220 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 22:54:40.621009 1543220 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:42 SystemTime:2023-10-09 22:54:40.611710339 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1009 22:54:40.621193 1543220 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1009 22:54:40.621489 1543220 start_flags.go:386] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1009 22:54:40.621644 1543220 start_flags.go:908] Wait components to verify : map[apiserver:true system_pods:true]
	I1009 22:54:40.624064 1543220 out.go:169] Using Docker driver with root privileges
	I1009 22:54:40.626081 1543220 cni.go:84] Creating CNI manager for ""
	I1009 22:54:40.626101 1543220 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 22:54:40.626122 1543220 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 22:54:40.626138 1543220 start_flags.go:323] config:
	{Name:download-only-132234 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-132234 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1009 22:54:40.628248 1543220 out.go:97] Starting control plane node download-only-132234 in cluster download-only-132234
	I1009 22:54:40.628267 1543220 cache.go:122] Beginning downloading kic base image for docker with crio
	I1009 22:54:40.630346 1543220 out.go:97] Pulling base image ...
	I1009 22:54:40.630384 1543220 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1009 22:54:40.630550 1543220 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon
	I1009 22:54:40.648969 1543220 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae to local cache
	I1009 22:54:40.649170 1543220 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local cache directory
	I1009 22:54:40.649271 1543220 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae to local cache
	I1009 22:54:40.732734 1543220 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4
	I1009 22:54:40.732758 1543220 cache.go:57] Caching tarball of preloaded images
	I1009 22:54:40.732913 1543220 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1009 22:54:40.735451 1543220 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1009 22:54:40.735484 1543220 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4 ...
	I1009 22:54:40.873334 1543220 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:743cd3b7071469270e4dbdc0d89badaa -> /home/jenkins/minikube-integration/17375-1537865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4
	I1009 22:54:45.593996 1543220 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae as a tarball
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-132234"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.10s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.2/json-events (11.78s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-132234 --force --alsologtostderr --kubernetes-version=v1.28.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-132234 --force --alsologtostderr --kubernetes-version=v1.28.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (11.78378315s)
--- PASS: TestDownloadOnly/v1.28.2/json-events (11.78s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.2/preload-exists
--- PASS: TestDownloadOnly/v1.28.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.2/LogsDuration (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.2/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-132234
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-132234: exit status 85 (95.635753ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-132234 | jenkins | v1.31.2 | 09 Oct 23 22:54 UTC |          |
	|         | -p download-only-132234        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-132234 | jenkins | v1.31.2 | 09 Oct 23 22:54 UTC |          |
	|         | -p download-only-132234        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.2   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/09 22:54:54
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.21.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 22:54:54.608682 1543298 out.go:296] Setting OutFile to fd 1 ...
	I1009 22:54:54.608844 1543298 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1009 22:54:54.608854 1543298 out.go:309] Setting ErrFile to fd 2...
	I1009 22:54:54.608860 1543298 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1009 22:54:54.609111 1543298 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17375-1537865/.minikube/bin
	W1009 22:54:54.609245 1543298 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17375-1537865/.minikube/config/config.json: open /home/jenkins/minikube-integration/17375-1537865/.minikube/config/config.json: no such file or directory
	I1009 22:54:54.609467 1543298 out.go:303] Setting JSON to true
	I1009 22:54:54.610335 1543298 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":23838,"bootTime":1696868257,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1009 22:54:54.610408 1543298 start.go:138] virtualization:  
	I1009 22:54:54.613000 1543298 out.go:97] [download-only-132234] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1009 22:54:54.613408 1543298 notify.go:220] Checking for updates...
	I1009 22:54:54.616780 1543298 out.go:169] MINIKUBE_LOCATION=17375
	I1009 22:54:54.618808 1543298 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 22:54:54.621165 1543298 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17375-1537865/kubeconfig
	I1009 22:54:54.623299 1543298 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17375-1537865/.minikube
	I1009 22:54:54.625324 1543298 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1009 22:54:54.629318 1543298 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1009 22:54:54.629919 1543298 config.go:182] Loaded profile config "download-only-132234": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W1009 22:54:54.629966 1543298 start.go:810] api.Load failed for download-only-132234: filestore "download-only-132234": Docker machine "download-only-132234" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1009 22:54:54.630105 1543298 driver.go:378] Setting default libvirt URI to qemu:///system
	W1009 22:54:54.630134 1543298 start.go:810] api.Load failed for download-only-132234: filestore "download-only-132234": Docker machine "download-only-132234" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1009 22:54:54.655615 1543298 docker.go:122] docker version: linux-24.0.6:Docker Engine - Community
	I1009 22:54:54.655703 1543298 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 22:54:54.740121 1543298 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-10-09 22:54:54.729682839 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1009 22:54:54.740238 1543298 docker.go:295] overlay module found
	I1009 22:54:54.742283 1543298 out.go:97] Using the docker driver based on existing profile
	I1009 22:54:54.742313 1543298 start.go:298] selected driver: docker
	I1009 22:54:54.742332 1543298 start.go:902] validating driver "docker" against &{Name:download-only-132234 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-132234 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1009 22:54:54.742512 1543298 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 22:54:54.812936 1543298 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:28 OomKillDisable:true NGoroutines:38 SystemTime:2023-10-09 22:54:54.80231212 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1009 22:54:54.813382 1543298 cni.go:84] Creating CNI manager for ""
	I1009 22:54:54.813400 1543298 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 22:54:54.813413 1543298 start_flags.go:323] config:
	{Name:download-only-132234 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:download-only-132234 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1009 22:54:54.816080 1543298 out.go:97] Starting control plane node download-only-132234 in cluster download-only-132234
	I1009 22:54:54.816109 1543298 cache.go:122] Beginning downloading kic base image for docker with crio
	I1009 22:54:54.818171 1543298 out.go:97] Pulling base image ...
	I1009 22:54:54.818196 1543298 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1009 22:54:54.818375 1543298 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local docker daemon
	I1009 22:54:54.835544 1543298 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae to local cache
	I1009 22:54:54.835672 1543298 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local cache directory
	I1009 22:54:54.835697 1543298 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae in local cache directory, skipping pull
	I1009 22:54:54.835702 1543298 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae exists in cache, skipping pull
	I1009 22:54:54.835710 1543298 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae as a tarball
	I1009 22:54:54.892534 1543298 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.2/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4
	I1009 22:54:54.892562 1543298 cache.go:57] Caching tarball of preloaded images
	I1009 22:54:54.892739 1543298 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I1009 22:54:54.895470 1543298 out.go:97] Downloading Kubernetes v1.28.2 preload ...
	I1009 22:54:54.895499 1543298 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4 ...
	I1009 22:54:55.008868 1543298 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.2/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4?checksum=md5:ec283948b04358f92432bdd325b7fb0b -> /home/jenkins/minikube-integration/17375-1537865/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-132234"
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.2/LogsDuration (0.10s)
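
The preload download logged above embeds its md5 in the URL's checksum query parameter, so the tarball can be verified out of band. A minimal sketch, assuming curl and md5sum are available; the URL and md5 value are taken verbatim from the download.go line, with the checksum query parameter dropped for the plain fetch:

	# Fetch the v1.28.2 cri-o preload and check it against the logged md5.
	curl -fLo preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4 \
	  "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.2/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4"
	echo "ec283948b04358f92432bdd325b7fb0b  preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4" | md5sum -c -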

TestDownloadOnly/DeleteAll (0.24s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.24s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.17s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-132234
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.17s)

TestBinaryMirror (0.63s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-063694 --alsologtostderr --binary-mirror http://127.0.0.1:45427 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-063694" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-063694
--- PASS: TestBinaryMirror (0.63s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.11s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:927: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-749116
addons_test.go:927: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-749116: exit status 85 (108.852812ms)
-- stdout --
	* Profile "addons-749116" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-749116"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.11s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.1s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:938: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-749116
addons_test.go:938: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-749116: exit status 85 (102.129337ms)
-- stdout --
	* Profile "addons-749116" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-749116"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.10s)

TestAddons/Setup (164.94s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-arm64 start -p addons-749116 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Done: out/minikube-linux-arm64 start -p addons-749116 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns: (2m44.93973826s)
--- PASS: TestAddons/Setup (164.94s)

TestAddons/parallel/Registry (19.54s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:329: registry stabilized in 52.078063ms
addons_test.go:331: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-fsrvj" [119f71e8-0a1b-4211-89c0-a57d00b658a4] Running
addons_test.go:331: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.022360274s
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-2scnj" [48eb5a8b-2d5b-4709-9800-45a0a2ca64eb] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.014375988s
addons_test.go:339: (dbg) Run:  kubectl --context addons-749116 delete po -l run=registry-test --now
addons_test.go:344: (dbg) Run:  kubectl --context addons-749116 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:344: (dbg) Done: kubectl --context addons-749116 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (8.361766318s)
addons_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p addons-749116 ip
2023/10/09 22:58:12 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:387: (dbg) Run:  out/minikube-linux-arm64 -p addons-749116 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (19.54s)
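
The registry check above reduces to resolving the in-cluster service name and issuing an HTTP probe against it from a throwaway pod. A sketch of reproducing it by hand, reusing the context, image, and service DNS name from the log (the pod name is arbitrary):

	# Probe the registry service from inside the cluster, as addons_test.go:344 does.
	kubectl --context addons-749116 run --rm registry-test --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -it -- \
	  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"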

TestAddons/parallel/InspektorGadget (11.06s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-ll8zr" [1f1e7972-0cc3-40e5-a654-bd1469fccad5] Running
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.01477267s
addons_test.go:840: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-749116
addons_test.go:840: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-749116: (6.041340671s)
--- PASS: TestAddons/parallel/InspektorGadget (11.06s)

TestAddons/parallel/MetricsServer (5.96s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:406: metrics-server stabilized in 4.468855ms
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-5s7nh" [5f9b199b-4c0a-4b45-98d4-c13e2a5dc381] Running
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.014112384s
addons_test.go:414: (dbg) Run:  kubectl --context addons-749116 top pods -n kube-system
addons_test.go:431: (dbg) Run:  out/minikube-linux-arm64 -p addons-749116 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.96s)

TestAddons/parallel/CSI (48.71s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:560: csi-hostpath-driver pods stabilized in 55.556695ms
addons_test.go:563: (dbg) Run:  kubectl --context addons-749116 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-749116 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-749116 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-749116 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-749116 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-749116 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-749116 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-749116 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-749116 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-749116 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-749116 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:573: (dbg) Run:  kubectl --context addons-749116 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [e1363459-0161-479d-8889-1221f57274c2] Pending
helpers_test.go:344: "task-pv-pod" [e1363459-0161-479d-8889-1221f57274c2] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [e1363459-0161-479d-8889-1221f57274c2] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.015843657s
addons_test.go:583: (dbg) Run:  kubectl --context addons-749116 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:588: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-749116 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-749116 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-749116 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:593: (dbg) Run:  kubectl --context addons-749116 delete pod task-pv-pod
addons_test.go:599: (dbg) Run:  kubectl --context addons-749116 delete pvc hpvc
addons_test.go:605: (dbg) Run:  kubectl --context addons-749116 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:610: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-749116 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-749116 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-749116 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-749116 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-749116 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-749116 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-749116 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-749116 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-749116 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:615: (dbg) Run:  kubectl --context addons-749116 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:620: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [b1362741-b4aa-4fb7-8193-e4a7ec909f86] Pending
helpers_test.go:344: "task-pv-pod-restore" [b1362741-b4aa-4fb7-8193-e4a7ec909f86] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [b1362741-b4aa-4fb7-8193-e4a7ec909f86] Running
addons_test.go:620: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.020505794s
addons_test.go:625: (dbg) Run:  kubectl --context addons-749116 delete pod task-pv-pod-restore
addons_test.go:625: (dbg) Done: kubectl --context addons-749116 delete pod task-pv-pod-restore: (1.08677764s)
addons_test.go:629: (dbg) Run:  kubectl --context addons-749116 delete pvc hpvc-restore
addons_test.go:633: (dbg) Run:  kubectl --context addons-749116 delete volumesnapshot new-snapshot-demo
addons_test.go:637: (dbg) Run:  out/minikube-linux-arm64 -p addons-749116 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:637: (dbg) Done: out/minikube-linux-arm64 -p addons-749116 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.865039154s)
addons_test.go:641: (dbg) Run:  out/minikube-linux-arm64 -p addons-749116 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (48.71s)
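
Condensed, the CSI exercise above is: bind a PVC, snapshot the volume while a pod holds it, then restore the snapshot into a fresh PVC and pod. A sketch of the same flow using the manifests the test references; the paths assume a minikube source checkout, since the testdata contents are not reproduced in this log:

	kubectl --context addons-749116 create -f testdata/csi-hostpath-driver/pvc.yaml
	kubectl --context addons-749116 create -f testdata/csi-hostpath-driver/pv-pod.yaml
	kubectl --context addons-749116 create -f testdata/csi-hostpath-driver/snapshot.yaml
	# Wait for readyToUse=true before restoring, as the helpers above poll for.
	kubectl --context addons-749116 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse}
	kubectl --context addons-749116 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
	kubectl --context addons-749116 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml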

TestAddons/parallel/Headlamp (12.57s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:823: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-749116 --alsologtostderr -v=1
addons_test.go:823: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-749116 --alsologtostderr -v=1: (1.531891897s)
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-94b766c-z5lhq" [a5c8d7a6-87da-4a13-9097-597b1e0aaf8d] Pending
helpers_test.go:344: "headlamp-94b766c-z5lhq" [a5c8d7a6-87da-4a13-9097-597b1e0aaf8d] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-94b766c-z5lhq" [a5c8d7a6-87da-4a13-9097-597b1e0aaf8d] Running
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.036086229s
--- PASS: TestAddons/parallel/Headlamp (12.57s)

TestAddons/parallel/CloudSpanner (5.63s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-56665cdfc-st8cz" [8395f4ff-693a-4926-b3fa-0f7109add915] Running
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.013088205s
addons_test.go:859: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-749116
--- PASS: TestAddons/parallel/CloudSpanner (5.63s)

TestAddons/parallel/LocalPath (52.1s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:872: (dbg) Run:  kubectl --context addons-749116 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:878: (dbg) Run:  kubectl --context addons-749116 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:882: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-749116 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-749116 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-749116 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-749116 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-749116 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [0be2b186-de16-470d-b504-42f755a88d4f] Pending
helpers_test.go:344: "test-local-path" [0be2b186-de16-470d-b504-42f755a88d4f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [0be2b186-de16-470d-b504-42f755a88d4f] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [0be2b186-de16-470d-b504-42f755a88d4f] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.010759384s
addons_test.go:890: (dbg) Run:  kubectl --context addons-749116 get pvc test-pvc -o=json
addons_test.go:899: (dbg) Run:  out/minikube-linux-arm64 -p addons-749116 ssh "cat /opt/local-path-provisioner/pvc-590f73da-98a3-4a3a-b26d-19a0557f0a9c_default_test-pvc/file1"
addons_test.go:911: (dbg) Run:  kubectl --context addons-749116 delete pod test-local-path
addons_test.go:915: (dbg) Run:  kubectl --context addons-749116 delete pvc test-pvc
addons_test.go:919: (dbg) Run:  out/minikube-linux-arm64 -p addons-749116 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:919: (dbg) Done: out/minikube-linux-arm64 -p addons-749116 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.483608433s)
--- PASS: TestAddons/parallel/LocalPath (52.10s)

TestAddons/parallel/NvidiaDevicePlugin (5.64s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-q2tdr" [689841eb-bbcf-4415-9a4f-66a28c9b2621] Running
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.036271936s
addons_test.go:954: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-749116
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.64s)

TestAddons/serial/GCPAuth/Namespaces (0.49s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:649: (dbg) Run:  kubectl --context addons-749116 create ns new-namespace
addons_test.go:663: (dbg) Run:  kubectl --context addons-749116 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.49s)

TestAddons/StoppedEnableDisable (12.48s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:171: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-749116
addons_test.go:171: (dbg) Done: out/minikube-linux-arm64 stop -p addons-749116: (12.140576235s)
addons_test.go:175: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-749116
addons_test.go:179: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-749116
addons_test.go:184: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-749116
--- PASS: TestAddons/StoppedEnableDisable (12.48s)

TestCertOptions (37.22s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-834749 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-834749 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (34.413875636s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-834749 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-834749 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-834749 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-834749" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-834749
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-834749: (2.048209336s)
--- PASS: TestCertOptions (37.22s)
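
The assertion behind cert_options_test.go:60 is that the extra --apiserver-ips and --apiserver-names values end up in the apiserver certificate. A hedged way to eyeball this by hand, reusing the ssh command from the log; the grep pattern is an assumption about what to look for, not the test's literal check:

	out/minikube-linux-arm64 -p cert-options-834749 ssh \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
	  | grep -E '192\.168\.15\.15|www\.google\.com'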

TestCertExpiration (278.66s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-980749 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
E1009 23:42:53.339405 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/addons-749116/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-980749 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (40.533960186s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-980749 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
E1009 23:46:11.759099 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/functional-634060/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-980749 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (55.658716555s)
helpers_test.go:175: Cleaning up "cert-expiration-980749" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-980749
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-980749: (2.469812712s)
--- PASS: TestCertExpiration (278.66s)

TestForceSystemdFlag (42.13s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-387939 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-387939 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (39.293132129s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-387939 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-387939" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-387939
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-387939: (2.492278101s)
--- PASS: TestForceSystemdFlag (42.13s)
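
What docker_test.go:132 inspects is the CRI-O drop-in that --force-systemd writes onto the node. A sketch of narrowing the same output down by hand; the cgroup_manager key is an assumption about the relevant setting rather than the test's literal match:

	out/minikube-linux-arm64 -p force-systemd-flag-387939 ssh \
	  "cat /etc/crio/crio.conf.d/02-crio.conf" | grep cgroup_manager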

TestForceSystemdEnv (44.32s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-577453 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-577453 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (41.333869993s)
helpers_test.go:175: Cleaning up "force-systemd-env-577453" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-577453
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-577453: (2.987122868s)
--- PASS: TestForceSystemdEnv (44.32s)

TestErrorSpam/setup (33.15s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-199071 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-199071 --driver=docker  --container-runtime=crio
E1009 23:02:53.339283 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/addons-749116/client.crt: no such file or directory
E1009 23:02:53.344956 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/addons-749116/client.crt: no such file or directory
E1009 23:02:53.355219 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/addons-749116/client.crt: no such file or directory
E1009 23:02:53.375465 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/addons-749116/client.crt: no such file or directory
E1009 23:02:53.415725 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/addons-749116/client.crt: no such file or directory
E1009 23:02:53.495988 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/addons-749116/client.crt: no such file or directory
E1009 23:02:53.656224 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/addons-749116/client.crt: no such file or directory
E1009 23:02:53.976727 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/addons-749116/client.crt: no such file or directory
E1009 23:02:54.617919 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/addons-749116/client.crt: no such file or directory
E1009 23:02:55.898155 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/addons-749116/client.crt: no such file or directory
E1009 23:02:58.459244 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/addons-749116/client.crt: no such file or directory
E1009 23:03:03.579523 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/addons-749116/client.crt: no such file or directory
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-199071 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-199071 --driver=docker  --container-runtime=crio: (33.152633668s)
--- PASS: TestErrorSpam/setup (33.15s)

TestErrorSpam/start (0.94s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-199071 --log_dir /tmp/nospam-199071 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-199071 --log_dir /tmp/nospam-199071 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-199071 --log_dir /tmp/nospam-199071 start --dry-run
--- PASS: TestErrorSpam/start (0.94s)

TestErrorSpam/status (1.16s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-199071 --log_dir /tmp/nospam-199071 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-199071 --log_dir /tmp/nospam-199071 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-199071 --log_dir /tmp/nospam-199071 status
--- PASS: TestErrorSpam/status (1.16s)

TestErrorSpam/pause (1.92s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-199071 --log_dir /tmp/nospam-199071 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-199071 --log_dir /tmp/nospam-199071 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-199071 --log_dir /tmp/nospam-199071 pause
E1009 23:03:13.820584 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/addons-749116/client.crt: no such file or directory
--- PASS: TestErrorSpam/pause (1.92s)

TestErrorSpam/unpause (2.09s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-199071 --log_dir /tmp/nospam-199071 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-199071 --log_dir /tmp/nospam-199071 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-199071 --log_dir /tmp/nospam-199071 unpause
--- PASS: TestErrorSpam/unpause (2.09s)

TestErrorSpam/stop (1.51s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-199071 --log_dir /tmp/nospam-199071 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-199071 --log_dir /tmp/nospam-199071 stop: (1.270638148s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-199071 --log_dir /tmp/nospam-199071 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-199071 --log_dir /tmp/nospam-199071 stop
--- PASS: TestErrorSpam/stop (1.51s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17375-1537865/.minikube/files/etc/test/nested/copy/1543215/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (76.4s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-634060 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1009 23:03:34.301627 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/addons-749116/client.crt: no such file or directory
E1009 23:04:15.261843 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/addons-749116/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-634060 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m16.403872846s)
--- PASS: TestFunctional/serial/StartWithProxy (76.40s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (42.32s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-634060 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-634060 --alsologtostderr -v=8: (42.316967269s)
functional_test.go:659: soft start took 42.317492826s for "functional-634060" cluster.
--- PASS: TestFunctional/serial/SoftStart (42.32s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.11s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-634060 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.15s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-634060 cache add registry.k8s.io/pause:3.1: (1.38386839s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-634060 cache add registry.k8s.io/pause:3.3: (1.445797257s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-634060 cache add registry.k8s.io/pause:latest: (1.316380572s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.15s)
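
For reference, `cache add` both downloads an image into minikube's host-side cache (under the minikube home directory) and loads it into the node's container runtime; the test above simply repeats that for three pause tags. A minimal sketch of the same flow by hand, reusing this run's profile name:

  # cache an image and push it into the cluster's CRI-O
  out/minikube-linux-arm64 -p functional-634060 cache add registry.k8s.io/pause:3.1
  # confirm it is visible inside the node
  out/minikube-linux-arm64 -p functional-634060 ssh sudo crictl images | grep pause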

TestFunctional/serial/CacheCmd/cache/add_local (1.22s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-634060 /tmp/TestFunctionalserialCacheCmdcacheadd_local3589134807/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 cache add minikube-local-cache-test:functional-634060
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 cache delete minikube-local-cache-test:functional-634060
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-634060
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.22s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

TestFunctional/serial/CacheCmd/cache/list (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.35s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.35s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.32s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-634060 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (354.241728ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-arm64 -p functional-634060 cache reload: (1.196688381s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.32s)
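
The sequence above is the whole point of `cache reload`: the image is removed from the node's runtime, `crictl inspecti` confirms it is gone (the expected exit 1 above), and reload restores it from the host-side cache. Reproduced by hand, assuming the same profile:

  out/minikube-linux-arm64 -p functional-634060 ssh sudo crictl rmi registry.k8s.io/pause:latest
  out/minikube-linux-arm64 -p functional-634060 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # fails: image gone
  out/minikube-linux-arm64 -p functional-634060 cache reload
  out/minikube-linux-arm64 -p functional-634060 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # succeeds again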

TestFunctional/serial/CacheCmd/cache/delete (0.15s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.15s)

TestFunctional/serial/MinikubeKubectlCmd (0.16s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 kubectl -- --context functional-634060 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.16s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.16s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-634060 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.16s)

TestFunctional/serial/ExtraConfig (32.67s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-634060 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1009 23:05:37.183253 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/addons-749116/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-634060 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (32.664795165s)
functional_test.go:757: restart took 32.664908102s for "functional-634060" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (32.67s)
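
The restart above also shows the --extra-config syntax: component.flag=value, passed through to the named Kubernetes component (the apiserver here). For example:

  # restart the cluster with an extra admission plugin enabled on the apiserver
  out/minikube-linux-arm64 start -p functional-634060 \
    --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all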

TestFunctional/serial/ComponentHealth (0.11s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-634060 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)

TestFunctional/serial/LogsCmd (1.9s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-634060 logs: (1.902036305s)
--- PASS: TestFunctional/serial/LogsCmd (1.90s)

TestFunctional/serial/LogsFileCmd (1.98s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 logs --file /tmp/TestFunctionalserialLogsFileCmd4223778206/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-634060 logs --file /tmp/TestFunctionalserialLogsFileCmd4223778206/001/logs.txt: (1.977893914s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.98s)

TestFunctional/serial/InvalidService (4.27s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-634060 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-634060
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-634060: exit status 115 (463.484173ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31906 |
	|-----------|-------------|-------------|---------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-634060 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.27s)
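
Exit status 115 above is minikube's SVC_UNREACHABLE path: the Service object exists, so the NodePort URL can still be printed, but no running pod backs it. A sketch of the reproduction, assuming testdata/invalidsvc.yaml defines a Service whose selector matches no running pod:

  kubectl --context functional-634060 apply -f testdata/invalidsvc.yaml
  out/minikube-linux-arm64 service invalid-svc -p functional-634060   # exits 115: no running pod for service
  kubectl --context functional-634060 delete -f testdata/invalidsvc.yaml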

TestFunctional/parallel/ConfigCmd (0.54s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-634060 config get cpus: exit status 14 (87.320751ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-634060 config get cpus: exit status 14 (97.604458ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.54s)
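
The two exit-status-14 failures above are the assertion itself: `config get` on an unset key must fail rather than print an empty value. The round trip by hand:

  out/minikube-linux-arm64 -p functional-634060 config unset cpus
  out/minikube-linux-arm64 -p functional-634060 config get cpus    # exit 14: key not in config
  out/minikube-linux-arm64 -p functional-634060 config set cpus 2
  out/minikube-linux-arm64 -p functional-634060 config get cpus    # prints 2, exit 0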

TestFunctional/parallel/DashboardCmd (10.37s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-634060 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-634060 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1571580: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.37s)

TestFunctional/parallel/DryRun (0.77s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-634060 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-634060 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (287.999191ms)
-- stdout --
	* [functional-634060] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17375
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17375-1537865/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17375-1537865/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1009 23:07:03.790014 1570729 out.go:296] Setting OutFile to fd 1 ...
	I1009 23:07:03.790180 1570729 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1009 23:07:03.790185 1570729 out.go:309] Setting ErrFile to fd 2...
	I1009 23:07:03.790191 1570729 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1009 23:07:03.790456 1570729 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17375-1537865/.minikube/bin
	I1009 23:07:03.790902 1570729 out.go:303] Setting JSON to false
	I1009 23:07:03.792214 1570729 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":24567,"bootTime":1696868257,"procs":471,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1009 23:07:03.792300 1570729 start.go:138] virtualization:  
	I1009 23:07:03.796180 1570729 out.go:177] * [functional-634060] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1009 23:07:03.798835 1570729 out.go:177]   - MINIKUBE_LOCATION=17375
	I1009 23:07:03.800730 1570729 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 23:07:03.798944 1570729 notify.go:220] Checking for updates...
	I1009 23:07:03.806116 1570729 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17375-1537865/kubeconfig
	I1009 23:07:03.808347 1570729 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17375-1537865/.minikube
	I1009 23:07:03.810032 1570729 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1009 23:07:03.811822 1570729 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 23:07:03.814372 1570729 config.go:182] Loaded profile config "functional-634060": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1009 23:07:03.815326 1570729 driver.go:378] Setting default libvirt URI to qemu:///system
	I1009 23:07:03.852510 1570729 docker.go:122] docker version: linux-24.0.6:Docker Engine - Community
	I1009 23:07:03.852652 1570729 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 23:07:03.977939 1570729 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:46 SystemTime:2023-10-09 23:07:03.966289736 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1009 23:07:03.978098 1570729 docker.go:295] overlay module found
	I1009 23:07:03.980166 1570729 out.go:177] * Using the docker driver based on existing profile
	I1009 23:07:03.982015 1570729 start.go:298] selected driver: docker
	I1009 23:07:03.982034 1570729 start.go:902] validating driver "docker" against &{Name:functional-634060 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:functional-634060 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1009 23:07:03.982147 1570729 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 23:07:03.984649 1570729 out.go:177] 
	W1009 23:07:03.987651 1570729 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1009 23:07:03.989373 1570729 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-634060 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.77s)
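
--dry-run validates the requested settings against the existing profile without starting anything; here the 250MB request trips the 1800MB usable minimum and exits 23 (RSRC_INSUFFICIENT_REQ_MEMORY), while the second, flag-free dry run passes. The two invocations, condensed:

  out/minikube-linux-arm64 start -p functional-634060 --dry-run --memory 250MB --driver=docker --container-runtime=crio   # exit 23
  out/minikube-linux-arm64 start -p functional-634060 --dry-run --driver=docker --container-runtime=crio                  # exit 0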

TestFunctional/parallel/InternationalLanguage (0.3s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-634060 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-634060 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (302.933481ms)
-- stdout --
	* [functional-634060] minikube v1.31.2 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17375
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17375-1537865/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17375-1537865/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1009 23:07:05.891225 1571173 out.go:296] Setting OutFile to fd 1 ...
	I1009 23:07:05.891431 1571173 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1009 23:07:05.891443 1571173 out.go:309] Setting ErrFile to fd 2...
	I1009 23:07:05.891449 1571173 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1009 23:07:05.891844 1571173 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17375-1537865/.minikube/bin
	I1009 23:07:05.892266 1571173 out.go:303] Setting JSON to false
	I1009 23:07:05.893568 1571173 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":24569,"bootTime":1696868257,"procs":471,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1009 23:07:05.893655 1571173 start.go:138] virtualization:  
	I1009 23:07:05.896095 1571173 out.go:177] * [functional-634060] minikube v1.31.2 sur Ubuntu 20.04 (arm64)
	I1009 23:07:05.898571 1571173 out.go:177]   - MINIKUBE_LOCATION=17375
	I1009 23:07:05.898702 1571173 notify.go:220] Checking for updates...
	I1009 23:07:05.902299 1571173 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 23:07:05.904103 1571173 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17375-1537865/kubeconfig
	I1009 23:07:05.906419 1571173 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17375-1537865/.minikube
	I1009 23:07:05.908698 1571173 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1009 23:07:05.910633 1571173 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 23:07:05.913049 1571173 config.go:182] Loaded profile config "functional-634060": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1009 23:07:05.913705 1571173 driver.go:378] Setting default libvirt URI to qemu:///system
	I1009 23:07:05.946764 1571173 docker.go:122] docker version: linux-24.0.6:Docker Engine - Community
	I1009 23:07:05.946864 1571173 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 23:07:06.087626 1571173 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:47 SystemTime:2023-10-09 23:07:06.075701899 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1009 23:07:06.087737 1571173 docker.go:295] overlay module found
	I1009 23:07:06.090528 1571173 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1009 23:07:06.092499 1571173 start.go:298] selected driver: docker
	I1009 23:07:06.092522 1571173 start.go:902] validating driver "docker" against &{Name:functional-634060 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696360059-17345@sha256:76d99edd1576614d5c20a839dd16ae1d7c810f3b909a01797063d483159ea3ae Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:functional-634060 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1009 23:07:06.092647 1571173 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 23:07:06.096233 1571173 out.go:177] 
	W1009 23:07:06.098865 1571173 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1009 23:07:06.100880 1571173 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.30s)

TestFunctional/parallel/StatusCmd (1.67s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.67s)
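
`status -f` takes a Go template over the status struct (Host, Kubelet, APIServer, Kubeconfig); the "kublet" spelling above is just the label the test chose for its output, not a field name. On a healthy cluster the templated form should print something like:

  out/minikube-linux-arm64 -p functional-634060 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
  # host:Running,kublet:Running,apiserver:Running,kubeconfig:Configured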

TestFunctional/parallel/ServiceCmdConnect (10.83s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1626: (dbg) Run:  kubectl --context functional-634060 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-634060 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-x7kb7" [421369da-5f92-4a24-a973-0666d31f7399] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-x7kb7" [421369da-5f92-4a24-a973-0666d31f7399] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.023980931s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:30815
functional_test.go:1674: http://192.168.49.2:30815: success! body:

Hostname: hello-node-connect-7799dfb7c6-x7kb7

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30815
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.83s)
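
The flow above is the standard NodePort round trip: create a deployment, expose it, ask minikube for the node-level URL, and hit it; the echoserver body confirms the request reached the pod. By hand, with this run's image:

  kubectl --context functional-634060 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
  kubectl --context functional-634060 expose deployment hello-node-connect --type=NodePort --port=8080
  URL=$(out/minikube-linux-arm64 -p functional-634060 service hello-node-connect --url)   # e.g. http://192.168.49.2:30815
  curl "$URL"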

TestFunctional/parallel/AddonsCmd (0.22s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.22s)

TestFunctional/parallel/PersistentVolumeClaim (28.63s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [27e43a97-4124-45ee-84d6-d3d1ec184544] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.013570486s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-634060 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-634060 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-634060 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-634060 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [4df372f3-5860-487f-8ffa-4304e3beee3a] Pending
helpers_test.go:344: "sp-pod" [4df372f3-5860-487f-8ffa-4304e3beee3a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [4df372f3-5860-487f-8ffa-4304e3beee3a] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.012122633s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-634060 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-634060 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-634060 delete -f testdata/storage-provisioner/pod.yaml: (1.049792412s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-634060 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [673e04fd-eb21-4213-81bc-78a1b8f1405b] Pending
helpers_test.go:344: "sp-pod" [673e04fd-eb21-4213-81bc-78a1b8f1405b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [673e04fd-eb21-4213-81bc-78a1b8f1405b] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.020772557s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-634060 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (28.63s)
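
The persistence check above is: write a file through the first pod, delete only the pod, then read the file back from a fresh pod bound to the same claim; the data must outlive the pod because it lives on the provisioned volume. Condensed:

  kubectl --context functional-634060 apply -f testdata/storage-provisioner/pvc.yaml
  kubectl --context functional-634060 apply -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-634060 exec sp-pod -- touch /tmp/mount/foo
  kubectl --context functional-634060 delete -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-634060 apply -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-634060 exec sp-pod -- ls /tmp/mount   # foo survives the pod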

TestFunctional/parallel/SSHCmd (0.78s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.78s)

TestFunctional/parallel/CpCmd (1.81s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 ssh -n functional-634060 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 cp functional-634060:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd265086463/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 ssh -n functional-634060 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.81s)

TestFunctional/parallel/FileSync (0.41s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1543215/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 ssh "sudo cat /etc/test/nested/copy/1543215/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.41s)

TestFunctional/parallel/CertSync (2.51s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1543215.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 ssh "sudo cat /etc/ssl/certs/1543215.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1543215.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 ssh "sudo cat /usr/share/ca-certificates/1543215.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/15432152.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 ssh "sudo cat /etc/ssl/certs/15432152.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/15432152.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 ssh "sudo cat /usr/share/ca-certificates/15432152.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.51s)
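
Each certificate is checked at three paths: its .pem name under /etc/ssl/certs and /usr/share/ca-certificates, plus an OpenSSL subject-hash name (51391683.0 and 3ec20f2e.0 above), which is how system trust stores index certificates. Assuming openssl is available in the node image, the hash can be recomputed to confirm the pairing:

  out/minikube-linux-arm64 -p functional-634060 ssh "openssl x509 -hash -noout -in /etc/ssl/certs/1543215.pem"   # expected: 51391683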

TestFunctional/parallel/NodeLabels (0.12s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-634060 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.12s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.97s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-634060 ssh "sudo systemctl is-active docker": exit status 1 (546.045319ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-634060 ssh "sudo systemctl is-active containerd": exit status 1 (425.588751ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.97s)
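
Both non-zero exits above are the desired outcome: with crio selected as the runtime, docker and containerd must be inactive, and `systemctl is-active` exits non-zero for anything not active (3 for "inactive", which minikube ssh surfaces as the status-3 messages while itself exiting 1). A quick check, assuming the runtime unit is named crio.service in the node:

  out/minikube-linux-arm64 -p functional-634060 ssh "sudo systemctl is-active crio"    # prints active, exit 0
  out/minikube-linux-arm64 -p functional-634060 ssh "sudo systemctl is-active docker"  # prints inactive, remote exit 3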

TestFunctional/parallel/License (0.45s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.45s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.83s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-634060 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-634060 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-634060 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1567076: os: process already finished
helpers_test.go:502: unable to terminate pid 1566916: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-634060 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.83s)

TestFunctional/parallel/Version/short (0.12s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 version --short
--- PASS: TestFunctional/parallel/Version/short (0.12s)

TestFunctional/parallel/Version/components (1.07s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-arm64 -p functional-634060 version -o=json --components: (1.073526756s)
--- PASS: TestFunctional/parallel/Version/components (1.07s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-634060 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.6s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-634060 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [0717c4ea-09c2-4847-8b5e-713e433a768c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [0717c4ea-09c2-4847-8b5e-713e433a768c] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.034058227s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.60s)
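
Reproducing this setup by hand takes two commands; a sketch, assuming the same profile and the testsvc.yaml manifest from minikube's test data:

    kubectl --context functional-634060 apply -f testdata/testsvc.yaml
    kubectl --context functional-634060 get pods -l run=nginx-svc --watch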

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-634060 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.2
registry.k8s.io/kube-proxy:v1.28.2
registry.k8s.io/kube-controller-manager:v1.28.2
registry.k8s.io/kube-apiserver:v1.28.2
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-634060
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-634060 image ls --format short --alsologtostderr:
I1009 23:07:08.364257 1571571 out.go:296] Setting OutFile to fd 1 ...
I1009 23:07:08.365913 1571571 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1009 23:07:08.365953 1571571 out.go:309] Setting ErrFile to fd 2...
I1009 23:07:08.365973 1571571 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1009 23:07:08.366280 1571571 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17375-1537865/.minikube/bin
I1009 23:07:08.368359 1571571 config.go:182] Loaded profile config "functional-634060": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I1009 23:07:08.368549 1571571 config.go:182] Loaded profile config "functional-634060": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I1009 23:07:08.369093 1571571 cli_runner.go:164] Run: docker container inspect functional-634060 --format={{.State.Status}}
I1009 23:07:08.425041 1571571 ssh_runner.go:195] Run: systemctl --version
I1009 23:07:08.425096 1571571 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-634060
I1009 23:07:08.481354 1571571 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34369 SSHKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/machines/functional-634060/id_rsa Username:docker}
I1009 23:07:08.593935 1571571 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.40s)
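
As the stderr above shows, image ls on the crio runtime resolves to crictl inside the node; a manual equivalent (sketch, same profile assumed):

    out/minikube-linux-arm64 -p functional-634060 image ls --format short
    out/minikube-linux-arm64 -p functional-634060 ssh 'sudo crictl images --output json'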

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-634060 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/library/nginx                 | latest             | 2a4fbb36e9660 | 196MB  |
| gcr.io/google-containers/addon-resizer  | functional-634060  | ffd4cfbbe753e | 34.1MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| localhost/my-image                      | functional-634060  | 24fa9eb5be5c9 | 1.64MB |
| registry.k8s.io/pause                   | 3.9                | 829e9de338bd5 | 520kB  |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | 04b4eaa3d3db8 | 60.9MB |
| docker.io/library/nginx                 | alpine             | df8fd1ca35d66 | 45.3MB |
| gcr.io/k8s-minikube/busybox             | latest             | 71a676dd070f4 | 1.63MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| registry.k8s.io/kube-apiserver          | v1.28.2            | 30bb499447fe1 | 121MB  |
| registry.k8s.io/kube-controller-manager | v1.28.2            | 89d57b83c1786 | 117MB  |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
| registry.k8s.io/kube-scheduler          | v1.28.2            | 64fc40cee3716 | 59.2MB |
| registry.k8s.io/kube-proxy              | v1.28.2            | 7da62c127fc0f | 69.9MB |
| registry.k8s.io/coredns/coredns         | v1.10.1            | 97e04611ad434 | 51.4MB |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| registry.k8s.io/etcd                    | 3.5.9-0            | 9cdd6470f48c8 | 182MB  |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-634060 image ls --format table --alsologtostderr:
I1009 23:07:15.085724 1572039 out.go:296] Setting OutFile to fd 1 ...
I1009 23:07:15.085991 1572039 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1009 23:07:15.086022 1572039 out.go:309] Setting ErrFile to fd 2...
I1009 23:07:15.086041 1572039 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1009 23:07:15.086359 1572039 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17375-1537865/.minikube/bin
I1009 23:07:15.087308 1572039 config.go:182] Loaded profile config "functional-634060": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I1009 23:07:15.087543 1572039 config.go:182] Loaded profile config "functional-634060": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I1009 23:07:15.088353 1572039 cli_runner.go:164] Run: docker container inspect functional-634060 --format={{.State.Status}}
I1009 23:07:15.117495 1572039 ssh_runner.go:195] Run: systemctl --version
I1009 23:07:15.117570 1572039 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-634060
I1009 23:07:15.163217 1572039 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34369 SSHKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/machines/functional-634060/id_rsa Username:docker}
I1009 23:07:15.285479 1572039 ssh_runner.go:195] Run: sudo crictl images --output json
2023/10/09 23:07:16 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.36s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-634060 image ls --format json --alsologtostderr:
[{"id":"04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"60867618"},{"id":"82bec8e7f36040d52acabad0d4cef30e95aa061ebb0d32ad92881d4067c4299a","repoDigests":["docker.io/library/deabf5b4c893c0c4dbd54979e7d85e586443e36a6fb9c44ba54f91c526475367-tmp@sha256:b209196de7ab4c91e7d81041da47ca0d4384c06c0e5ca2cc9d3ce8b3ee535b66"],"repoTags":[],"size":"1637643"},{"id":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","repoDigests":["registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3","registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"182203183"},{"id":"30bb499447fe
1bc655853e2e8ac386cdeb28c80263536259cb54f290f9a58d6c","repoDigests":["registry.k8s.io/kube-apiserver@sha256:07742a71be5e2ac5dc434618fa720ba38bebb463e3bdc0c58b600b4f7716bc3d","registry.k8s.io/kube-apiserver@sha256:6beea2e5531a0606613594fd3ed92d71bbdcef99dd3237522049a0b32cad736c"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.2"],"size":"121054158"},{"id":"7da62c127fc0f2c3473babe4dd0fe1da874278c4e524a490b1781e3e0e6dddfa","repoDigests":["registry.k8s.io/kube-proxy@sha256:41c8f92d1cd571e0e36af431f35c78379f84f5daf5b85d43014a9940d697afcf","registry.k8s.io/kube-proxy@sha256:714d43ef0334cfb0e15ffd89f0b385681374b72a4865be28ff891b6297c015b8"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.2"],"size":"69926807"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e0
9a"],"repoTags":[],"size":"42263767"},{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1634527"},{"id":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":["registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105","registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"51393451"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"
829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"520014"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"64fc40cee3716a4596d219b360ce536adb7998eaeae3f5dbb774d6503f5039d7","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab","registry.k8s.io/kube-scheduler@sha256:f673cc4710d8ed6e3bd224b5641d2537d08e19177a291c2d9e189ea16f081c88"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.2"],"size":"59188020"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","
repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"df8fd1ca35d66acf0c88cf3b0364ae8bd392860d54075094884e3d014e4d186b","repoDigests":["docker.io/library/nginx@sha256:4c93a3bd8bf95412889dd84213570102176b6052d88bb828eaf449c56aca55ef","docker.io/library/nginx@sha256:96032dda68e09456804a4939486df02acd5459c1e2b81c0eed017130098ca003"],"repoTags":["docker.io/library/nginx:alpine"],"size":"45331256"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f9
52adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"24fa9eb5be5c904c67173d4ada81a4ea5acf291c8e8ee97116524d6a0c7d5c5d","repoDigests":["localhost/my-image@sha256:1a14f48bc50518327c4f0f6b5134a0290d3b846922c5ee32ff00d467ec35cd42"],"repoTags":["localhost/my-image:functional-634060"],"size":"1640225"},{"id":"89d57b83c17862d0ca2dd214e9e5ad425f8d67ecba32d10b846f8d22d3b5597c","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:3c85f8a91743f4c306163137b121c64816c5c15bf2f
002c9238ce0a4ac22b5c8","registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.2"],"size":"117187380"},{"id":"2a4fbb36e96607b16e5af2e24dc6a1025a4795520c98c6b9ead9c4113617cb73","repoDigests":["docker.io/library/nginx@sha256:32da30332506740a2f7c34d5dc70467b7f14ec67d912703568daff790ab3f755","docker.io/library/nginx@sha256:65cd8f49af749786a95ea0c46a76c3269bb21cfcb0f0a81d2bbf0def96fb6324"],"repoTags":["docker.io/library/nginx:latest"],"size":"196196620"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-634060"],"size":"34114467"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596
ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-634060 image ls --format json --alsologtostderr:
I1009 23:07:14.762312 1571971 out.go:296] Setting OutFile to fd 1 ...
I1009 23:07:14.762516 1571971 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1009 23:07:14.762522 1571971 out.go:309] Setting ErrFile to fd 2...
I1009 23:07:14.762528 1571971 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1009 23:07:14.762771 1571971 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17375-1537865/.minikube/bin
I1009 23:07:14.763524 1571971 config.go:182] Loaded profile config "functional-634060": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I1009 23:07:14.763674 1571971 config.go:182] Loaded profile config "functional-634060": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I1009 23:07:14.764175 1571971 cli_runner.go:164] Run: docker container inspect functional-634060 --format={{.State.Status}}
I1009 23:07:14.796207 1571971 ssh_runner.go:195] Run: systemctl --version
I1009 23:07:14.796273 1571971 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-634060
I1009 23:07:14.820562 1571971 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34369 SSHKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/machines/functional-634060/id_rsa Username:docker}
I1009 23:07:14.925037 1571971 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)
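
The JSON above is an array of {id, repoDigests, repoTags, size} objects, so it composes with standard tooling; a sketch that extracts just the tags (assumes jq is available on the host):

    out/minikube-linux-arm64 -p functional-634060 image ls --format json | jq -r '.[].repoTags[]'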

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-634060 image ls --format yaml --alsologtostderr:
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 2a4fbb36e96607b16e5af2e24dc6a1025a4795520c98c6b9ead9c4113617cb73
repoDigests:
- docker.io/library/nginx@sha256:32da30332506740a2f7c34d5dc70467b7f14ec67d912703568daff790ab3f755
- docker.io/library/nginx@sha256:65cd8f49af749786a95ea0c46a76c3269bb21cfcb0f0a81d2bbf0def96fb6324
repoTags:
- docker.io/library/nginx:latest
size: "196196620"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-634060
size: "34114467"
- id: 89d57b83c17862d0ca2dd214e9e5ad425f8d67ecba32d10b846f8d22d3b5597c
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:3c85f8a91743f4c306163137b121c64816c5c15bf2f002c9238ce0a4ac22b5c8
- registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.2
size: "117187380"
- id: 64fc40cee3716a4596d219b360ce536adb7998eaeae3f5dbb774d6503f5039d7
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab
- registry.k8s.io/kube-scheduler@sha256:f673cc4710d8ed6e3bd224b5641d2537d08e19177a291c2d9e189ea16f081c88
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.2
size: "59188020"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "51393451"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: 7da62c127fc0f2c3473babe4dd0fe1da874278c4e524a490b1781e3e0e6dddfa
repoDigests:
- registry.k8s.io/kube-proxy@sha256:41c8f92d1cd571e0e36af431f35c78379f84f5daf5b85d43014a9940d697afcf
- registry.k8s.io/kube-proxy@sha256:714d43ef0334cfb0e15ffd89f0b385681374b72a4865be28ff891b6297c015b8
repoTags:
- registry.k8s.io/kube-proxy:v1.28.2
size: "69926807"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: df8fd1ca35d66acf0c88cf3b0364ae8bd392860d54075094884e3d014e4d186b
repoDigests:
- docker.io/library/nginx@sha256:4c93a3bd8bf95412889dd84213570102176b6052d88bb828eaf449c56aca55ef
- docker.io/library/nginx@sha256:96032dda68e09456804a4939486df02acd5459c1e2b81c0eed017130098ca003
repoTags:
- docker.io/library/nginx:alpine
size: "45331256"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace
repoDigests:
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
- registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "182203183"
- id: 30bb499447fe1bc655853e2e8ac386cdeb28c80263536259cb54f290f9a58d6c
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:07742a71be5e2ac5dc434618fa720ba38bebb463e3bdc0c58b600b4f7716bc3d
- registry.k8s.io/kube-apiserver@sha256:6beea2e5531a0606613594fd3ed92d71bbdcef99dd3237522049a0b32cad736c
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.2
size: "121054158"
- id: 04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "60867618"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "520014"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-634060 image ls --format yaml --alsologtostderr:
I1009 23:07:08.737041 1571645 out.go:296] Setting OutFile to fd 1 ...
I1009 23:07:08.737308 1571645 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1009 23:07:08.737336 1571645 out.go:309] Setting ErrFile to fd 2...
I1009 23:07:08.737354 1571645 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1009 23:07:08.737623 1571645 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17375-1537865/.minikube/bin
I1009 23:07:08.738333 1571645 config.go:182] Loaded profile config "functional-634060": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I1009 23:07:08.738505 1571645 config.go:182] Loaded profile config "functional-634060": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I1009 23:07:08.739076 1571645 cli_runner.go:164] Run: docker container inspect functional-634060 --format={{.State.Status}}
I1009 23:07:08.760737 1571645 ssh_runner.go:195] Run: systemctl --version
I1009 23:07:08.760798 1571645 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-634060
I1009 23:07:08.785735 1571645 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34369 SSHKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/machines/functional-634060/id_rsa Username:docker}
I1009 23:07:08.880899 1571645 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (5.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-634060 ssh pgrep buildkitd: exit status 1 (356.655005ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 image build -t localhost/my-image:functional-634060 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-634060 image build -t localhost/my-image:functional-634060 testdata/build --alsologtostderr: (4.850401343s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-arm64 -p functional-634060 image build -t localhost/my-image:functional-634060 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 82bec8e7f36
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-634060
--> 24fa9eb5be5
Successfully tagged localhost/my-image:functional-634060
24fa9eb5be5c904c67173d4ada81a4ea5acf291c8e8ee97116524d6a0c7d5c5d
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-634060 image build -t localhost/my-image:functional-634060 testdata/build --alsologtostderr:
I1009 23:07:09.398295 1571719 out.go:296] Setting OutFile to fd 1 ...
I1009 23:07:09.400130 1571719 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1009 23:07:09.400145 1571719 out.go:309] Setting ErrFile to fd 2...
I1009 23:07:09.400166 1571719 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1009 23:07:09.400529 1571719 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17375-1537865/.minikube/bin
I1009 23:07:09.401320 1571719 config.go:182] Loaded profile config "functional-634060": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I1009 23:07:09.402066 1571719 config.go:182] Loaded profile config "functional-634060": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I1009 23:07:09.402737 1571719 cli_runner.go:164] Run: docker container inspect functional-634060 --format={{.State.Status}}
I1009 23:07:09.426592 1571719 ssh_runner.go:195] Run: systemctl --version
I1009 23:07:09.426651 1571719 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-634060
I1009 23:07:09.452764 1571719 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34369 SSHKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/machines/functional-634060/id_rsa Username:docker}
I1009 23:07:09.551449 1571719 build_images.go:151] Building image from path: /tmp/build.1663069802.tar
I1009 23:07:09.551526 1571719 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1009 23:07:09.583952 1571719 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1663069802.tar
I1009 23:07:09.589762 1571719 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1663069802.tar: stat -c "%s %y" /var/lib/minikube/build/build.1663069802.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1663069802.tar': No such file or directory
I1009 23:07:09.589793 1571719 ssh_runner.go:362] scp /tmp/build.1663069802.tar --> /var/lib/minikube/build/build.1663069802.tar (3072 bytes)
I1009 23:07:09.653090 1571719 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1663069802
I1009 23:07:09.675871 1571719 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1663069802 -xf /var/lib/minikube/build/build.1663069802.tar
I1009 23:07:09.698994 1571719 crio.go:297] Building image: /var/lib/minikube/build/build.1663069802
I1009 23:07:09.699066 1571719 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-634060 /var/lib/minikube/build/build.1663069802 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1009 23:07:14.097915 1571719 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-634060 /var/lib/minikube/build/build.1663069802 --cgroup-manager=cgroupfs: (4.398821791s)
I1009 23:07:14.097978 1571719 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1663069802
I1009 23:07:14.113045 1571719 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1663069802.tar
I1009 23:07:14.129189 1571719 build_images.go:207] Built localhost/my-image:functional-634060 from /tmp/build.1663069802.tar
I1009 23:07:14.129217 1571719 build_images.go:123] succeeded building to: functional-634060
I1009 23:07:14.129222 1571719 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.73s)
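
The three STEPs logged above imply a build context of roughly this shape (a reconstruction, not the verbatim testdata/build contents):

    # Dockerfile (reconstructed from STEP 1/3 - 3/3 above)
    FROM gcr.io/k8s-minikube/busybox
    RUN true
    ADD content.txt /

As the stderr shows, minikube ships the context to the node as a tar under /var/lib/minikube/build and drives the build through sudo podman build ... --cgroup-manager=cgroupfs.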

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (2.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.606155376s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-634060
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.64s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 image load --daemon gcr.io/google-containers/addon-resizer:functional-634060 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-634060 image load --daemon gcr.io/google-containers/addon-resizer:functional-634060 --alsologtostderr: (3.915397572s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.19s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 image load --daemon gcr.io/google-containers/addon-resizer:functional-634060 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-634060 image load --daemon gcr.io/google-containers/addon-resizer:functional-634060 --alsologtostderr: (2.699062646s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.97s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.050195557s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-634060
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 image load --daemon gcr.io/google-containers/addon-resizer:functional-634060 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-634060 image load --daemon gcr.io/google-containers/addon-resizer:functional-634060 --alsologtostderr: (4.531099734s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.94s)
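
The tag-and-load flow under test is three chained commands, verbatim from this run:

    docker pull gcr.io/google-containers/addon-resizer:1.8.9
    docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-634060
    out/minikube-linux-arm64 -p functional-634060 image load --daemon gcr.io/google-containers/addon-resizer:functional-634060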

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-634060 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.106.220.205 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-634060 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.29s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.32s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.30s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 image save gcr.io/google-containers/addon-resizer:functional-634060 /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-arm64 -p functional-634060 image save gcr.io/google-containers/addon-resizer:functional-634060 /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr: (1.067902844s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.07s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (9.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-634060 /tmp/TestFunctionalparallelMountCmdany-port2764883335/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1696892789211035031" to /tmp/TestFunctionalparallelMountCmdany-port2764883335/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1696892789211035031" to /tmp/TestFunctionalparallelMountCmdany-port2764883335/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1696892789211035031" to /tmp/TestFunctionalparallelMountCmdany-port2764883335/001/test-1696892789211035031
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-634060 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (711.644617ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  9 23:06 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  9 23:06 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  9 23:06 test-1696892789211035031
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 ssh cat /mount-9p/test-1696892789211035031
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-634060 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [b0916e1c-4991-4f59-8b15-54e58b4f3a2a] Pending
helpers_test.go:344: "busybox-mount" [b0916e1c-4991-4f59-8b15-54e58b4f3a2a] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [b0916e1c-4991-4f59-8b15-54e58b4f3a2a] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [b0916e1c-4991-4f59-8b15-54e58b4f3a2a] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.018536754s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-634060 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-634060 /tmp/TestFunctionalparallelMountCmdany-port2764883335/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.10s)
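
The first findmnt probe failing and the retry passing is the expected pattern while the 9p server comes up; checking a mount by hand looks like this (sketch; /tmp/somedir stands in for the temp directory the test generates):

    out/minikube-linux-arm64 mount -p functional-634060 /tmp/somedir:/mount-9p &
    out/minikube-linux-arm64 -p functional-634060 ssh 'findmnt -T /mount-9p | grep 9p'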

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 image rm gcr.io/google-containers/addon-resizer:functional-634060 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.74s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-arm64 -p functional-634060 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr: (1.623211117s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.90s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-634060
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 image save --daemon gcr.io/google-containers/addon-resizer:functional-634060 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-arm64 -p functional-634060 image save --daemon gcr.io/google-containers/addon-resizer:functional-634060 --alsologtostderr: (1.004537845s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-634060
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.05s)
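
Together with ImageSaveToFile and ImageLoadFromFile above, this covers the full save/load round trip; condensed, with the paths from this run:

    out/minikube-linux-arm64 -p functional-634060 image save gcr.io/google-containers/addon-resizer:functional-634060 /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar
    out/minikube-linux-arm64 -p functional-634060 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar
    out/minikube-linux-arm64 -p functional-634060 image save --daemon gcr.io/google-containers/addon-resizer:functional-634060
    docker image inspect gcr.io/google-containers/addon-resizer:functional-634060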

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-634060 /tmp/TestFunctionalparallelMountCmdspecific-port3472459470/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-634060 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (484.050334ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-634060 /tmp/TestFunctionalparallelMountCmdspecific-port3472459470/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-634060 ssh "sudo umount -f /mount-9p": exit status 1 (342.955039ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-634060 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-634060 /tmp/TestFunctionalparallelMountCmdspecific-port3472459470/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.42s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (3.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-634060 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2768050312/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-634060 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2768050312/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-634060 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2768050312/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-634060 ssh "findmnt -T" /mount1: exit status 1 (1.441949157s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-634060 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-634060 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2768050312/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-634060 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2768050312/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-634060 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2768050312/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (3.58s)
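
The cleanup path exercised here is the --kill flag, which tears down all mount processes for the profile at once:

    out/minikube-linux-arm64 mount -p functional-634060 --kill=true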

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (7.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1436: (dbg) Run:  kubectl --context functional-634060 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-634060 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-kpd6k" [48ddc02c-3698-4ada-9b86-eac2a75391cb] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-kpd6k" [48ddc02c-3698-4ada-9b86-eac2a75391cb] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.021667903s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.26s)
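
The service under test is created with two stock kubectl commands, verbatim from the run:

    kubectl --context functional-634060 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
    kubectl --context functional-634060 expose deployment hello-node --type=NodePort --port=8080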

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.48s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1314: Took "471.624687ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1328: Took "120.318877ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.59s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.72s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1365: Took "446.959772ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1378: Took "77.741504ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.53s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.68s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 service list -o json
functional_test.go:1493: Took "681.369305ms" to run "out/minikube-linux-arm64 -p functional-634060 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.68s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.55s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:31817
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.55s)

TestFunctional/parallel/ServiceCmd/Format (0.58s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.58s)

TestFunctional/parallel/ServiceCmd/URL (0.65s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-arm64 -p functional-634060 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:31817
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.65s)
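
Once the URL is discovered, the endpoint can be probed directly; a minimal sketch (curl is an assumption; echoserver reflects the request back on success):

	URL=$(out/minikube-linux-arm64 -p functional-634060 service hello-node --url)
	curl -sf "$URL"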

TestFunctional/delete_addon-resizer_images (0.09s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-634060
--- PASS: TestFunctional/delete_addon-resizer_images (0.09s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-634060
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-634060
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestIngressAddonLegacy/StartLegacyK8sCluster (98.58s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-arm64 start -p ingress-addon-legacy-789037 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1009 23:07:53.339945 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/addons-749116/client.crt: no such file or directory
E1009 23:08:21.024244 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/addons-749116/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-arm64 start -p ingress-addon-legacy-789037 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m38.576381164s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (98.58s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (13.66s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-789037 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-789037 addons enable ingress --alsologtostderr -v=5: (13.663847867s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (13.66s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.69s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-789037 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.69s)

TestJSONOutput/start/Command (77.86s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-622977 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E1009 23:12:33.679595 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/functional-634060/client.crt: no such file or directory
E1009 23:12:53.340174 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/addons-749116/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-622977 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (1m17.856638428s)
--- PASS: TestJSONOutput/start/Command (77.86s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.84s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-622977 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.84s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.76s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-622977 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.76s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.92s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-622977 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-622977 --output=json --user=testUser: (5.916768108s)
--- PASS: TestJSONOutput/stop/Command (5.92s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.27s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-607406 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-607406 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (97.801024ms)
-- stdout --
	{"specversion":"1.0","id":"32fdb466-080c-4a79-a96c-b2e0d9f07dea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-607406] minikube v1.31.2 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e80921c8-360f-4fc5-96f8-4d80566a67fc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17375"}}
	{"specversion":"1.0","id":"45d8adb4-6901-4199-bc78-5a3357bd267d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"67285950-61ae-45d6-b9f5-65f3a5bf9c58","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17375-1537865/kubeconfig"}}
	{"specversion":"1.0","id":"64f62caf-5b3e-4357-bc7b-dab8589a1f9f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17375-1537865/.minikube"}}
	{"specversion":"1.0","id":"fb9b8c2d-0c11-4c34-99ff-989c155c34d1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"0eb688ba-ce7c-40f0-a9de-a34a1f576347","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c073fb78-f5bf-4171-a6ef-5f90e3c528bf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-607406" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-607406
--- PASS: TestErrorJSONOutput (0.27s)
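
Each line of the stdout above is a CloudEvents-style JSON object, so the stream can be filtered programmatically; a minimal sketch reusing the profile name from the log (jq is an assumption):

	out/minikube-linux-arm64 start -p json-output-error-607406 --memory=2200 --output=json --wait=true --driver=fail \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + " (exit " + .data.exitcode + ")"'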

TestKicCustomNetwork/create_custom_network (44.55s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-472391 --network=
E1009 23:13:55.599834 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/functional-634060/client.crt: no such file or directory
E1009 23:14:12.469320 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/ingress-addon-legacy-789037/client.crt: no such file or directory
E1009 23:14:12.474619 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/ingress-addon-legacy-789037/client.crt: no such file or directory
E1009 23:14:12.484888 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/ingress-addon-legacy-789037/client.crt: no such file or directory
E1009 23:14:12.505126 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/ingress-addon-legacy-789037/client.crt: no such file or directory
E1009 23:14:12.545396 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/ingress-addon-legacy-789037/client.crt: no such file or directory
E1009 23:14:12.625702 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/ingress-addon-legacy-789037/client.crt: no such file or directory
E1009 23:14:12.786067 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/ingress-addon-legacy-789037/client.crt: no such file or directory
E1009 23:14:13.106597 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/ingress-addon-legacy-789037/client.crt: no such file or directory
E1009 23:14:13.747551 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/ingress-addon-legacy-789037/client.crt: no such file or directory
E1009 23:14:15.030936 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/ingress-addon-legacy-789037/client.crt: no such file or directory
E1009 23:14:17.591253 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/ingress-addon-legacy-789037/client.crt: no such file or directory
E1009 23:14:22.711490 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/ingress-addon-legacy-789037/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-472391 --network=: (42.427124533s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-472391" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-472391
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-472391: (2.097470762s)
--- PASS: TestKicCustomNetwork/create_custom_network (44.55s)

TestKicCustomNetwork/use_default_bridge_network (41.32s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-157493 --network=bridge
E1009 23:14:32.952341 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/ingress-addon-legacy-789037/client.crt: no such file or directory
E1009 23:14:53.433280 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/ingress-addon-legacy-789037/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-157493 --network=bridge: (39.250951674s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-157493" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-157493
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-157493: (2.040587403s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (41.32s)

TestKicExistingNetwork (37.39s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-561641 --network=existing-network
E1009 23:15:34.393482 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/ingress-addon-legacy-789037/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-561641 --network=existing-network: (35.106572111s)
helpers_test.go:175: Cleaning up "existing-network-561641" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-561641
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-561641: (2.115268918s)
--- PASS: TestKicExistingNetwork (37.39s)

TestKicCustomSubnet (35.83s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-920490 --subnet=192.168.60.0/24
E1009 23:16:11.759293 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/functional-634060/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-920490 --subnet=192.168.60.0/24: (33.597024741s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-920490 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-920490" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-920490
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-920490: (2.198419349s)
--- PASS: TestKicCustomSubnet (35.83s)
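
The subnet assertion above reduces to a single inspect; a minimal sketch using the same profile name:

	docker network inspect custom-subnet-920490 --format '{{(index .IPAM.Config 0).Subnet}}'
	# expected output: 192.168.60.0/24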

TestKicStaticIP (33.89s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-172586 --static-ip=192.168.200.200
E1009 23:16:39.442223 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/functional-634060/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-172586 --static-ip=192.168.200.200: (31.622519467s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-172586 ip
helpers_test.go:175: Cleaning up "static-ip-172586" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-172586
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-172586: (2.085281463s)
--- PASS: TestKicStaticIP (33.89s)
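
The static IP check above can likewise be scripted; a minimal sketch:

	[ "$(out/minikube-linux-arm64 -p static-ip-172586 ip)" = 192.168.200.200 ] && echo "static IP OK"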

TestMainNoArgs (0.09s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.09s)

TestMinikubeProfile (74.27s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-868717 --driver=docker  --container-runtime=crio
E1009 23:16:56.314612 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/ingress-addon-legacy-789037/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-868717 --driver=docker  --container-runtime=crio: (33.932847464s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-871893 --driver=docker  --container-runtime=crio
E1009 23:17:53.340198 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/addons-749116/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-871893 --driver=docker  --container-runtime=crio: (34.890491431s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-868717
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-871893
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-871893" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-871893
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-871893: (2.066030568s)
helpers_test.go:175: Cleaning up "first-868717" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-868717
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-868717: (2.018446753s)
--- PASS: TestMinikubeProfile (74.27s)

TestMountStart/serial/StartWithMountFirst (6.87s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-681714 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-681714 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.870626738s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.87s)

TestMountStart/serial/VerifyMountFirst (0.29s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-681714 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.29s)

TestMountStart/serial/StartWithMountSecond (9.52s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-683585 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-683585 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.515341499s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.52s)

TestMountStart/serial/VerifyMountSecond (0.3s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-683585 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.30s)

TestMountStart/serial/DeleteFirst (1.68s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-681714 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-681714 --alsologtostderr -v=5: (1.68422784s)
--- PASS: TestMountStart/serial/DeleteFirst (1.68s)

TestMountStart/serial/VerifyMountPostDelete (0.3s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-683585 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.30s)

TestMountStart/serial/Stop (1.23s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-683585
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-683585: (1.231610095s)
--- PASS: TestMountStart/serial/Stop (1.23s)

TestMountStart/serial/RestartStopped (7.82s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-683585
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-683585: (6.817502521s)
--- PASS: TestMountStart/serial/RestartStopped (7.82s)

TestMountStart/serial/VerifyMountPostStop (0.3s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-683585 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.30s)

TestMultiNode/serial/FreshStart2Nodes (98.56s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-717678 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E1009 23:19:12.469141 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/ingress-addon-legacy-789037/client.crt: no such file or directory
E1009 23:19:16.385437 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/addons-749116/client.crt: no such file or directory
E1009 23:19:40.155170 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/ingress-addon-legacy-789037/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-arm64 start -p multinode-717678 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m37.946070122s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717678 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (98.56s)

TestMultiNode/serial/DeployApp2Nodes (6.65s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-717678 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-717678 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-717678 -- rollout status deployment/busybox: (4.241566209s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-717678 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-717678 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-717678 -- exec busybox-5bc68d56bd-2rmqx -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-717678 -- exec busybox-5bc68d56bd-5q5k2 -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-717678 -- exec busybox-5bc68d56bd-2rmqx -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-717678 -- exec busybox-5bc68d56bd-5q5k2 -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-717678 -- exec busybox-5bc68d56bd-2rmqx -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-717678 -- exec busybox-5bc68d56bd-5q5k2 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.65s)
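
The DNS probes above iterate over the busybox replicas; a minimal sketch of the same loop (the app=busybox label is an assumption about the test manifest):

	for pod in $(out/minikube-linux-arm64 kubectl -p multinode-717678 -- get pods -l app=busybox -o jsonpath='{.items[*].metadata.name}'); do
	  out/minikube-linux-arm64 kubectl -p multinode-717678 -- exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
	done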

TestMultiNode/serial/AddNode (51.4s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-717678 -v 3 --alsologtostderr
E1009 23:21:11.758185 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/functional-634060/client.crt: no such file or directory
multinode_test.go:110: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-717678 -v 3 --alsologtostderr: (50.681644745s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717678 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (51.40s)

TestMultiNode/serial/ProfileList (0.36s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.36s)

TestMultiNode/serial/CopyFile (11.6s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717678 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717678 cp testdata/cp-test.txt multinode-717678:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717678 ssh -n multinode-717678 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717678 cp multinode-717678:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1439984499/001/cp-test_multinode-717678.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717678 ssh -n multinode-717678 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717678 cp multinode-717678:/home/docker/cp-test.txt multinode-717678-m02:/home/docker/cp-test_multinode-717678_multinode-717678-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717678 ssh -n multinode-717678 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717678 ssh -n multinode-717678-m02 "sudo cat /home/docker/cp-test_multinode-717678_multinode-717678-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717678 cp multinode-717678:/home/docker/cp-test.txt multinode-717678-m03:/home/docker/cp-test_multinode-717678_multinode-717678-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717678 ssh -n multinode-717678 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717678 ssh -n multinode-717678-m03 "sudo cat /home/docker/cp-test_multinode-717678_multinode-717678-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717678 cp testdata/cp-test.txt multinode-717678-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717678 ssh -n multinode-717678-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717678 cp multinode-717678-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1439984499/001/cp-test_multinode-717678-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717678 ssh -n multinode-717678-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717678 cp multinode-717678-m02:/home/docker/cp-test.txt multinode-717678:/home/docker/cp-test_multinode-717678-m02_multinode-717678.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717678 ssh -n multinode-717678-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717678 ssh -n multinode-717678 "sudo cat /home/docker/cp-test_multinode-717678-m02_multinode-717678.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717678 cp multinode-717678-m02:/home/docker/cp-test.txt multinode-717678-m03:/home/docker/cp-test_multinode-717678-m02_multinode-717678-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717678 ssh -n multinode-717678-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717678 ssh -n multinode-717678-m03 "sudo cat /home/docker/cp-test_multinode-717678-m02_multinode-717678-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717678 cp testdata/cp-test.txt multinode-717678-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717678 ssh -n multinode-717678-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717678 cp multinode-717678-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1439984499/001/cp-test_multinode-717678-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717678 ssh -n multinode-717678-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717678 cp multinode-717678-m03:/home/docker/cp-test.txt multinode-717678:/home/docker/cp-test_multinode-717678-m03_multinode-717678.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717678 ssh -n multinode-717678-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717678 ssh -n multinode-717678 "sudo cat /home/docker/cp-test_multinode-717678-m03_multinode-717678.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717678 cp multinode-717678-m03:/home/docker/cp-test.txt multinode-717678-m02:/home/docker/cp-test_multinode-717678-m03_multinode-717678-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717678 ssh -n multinode-717678-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717678 ssh -n multinode-717678-m02 "sudo cat /home/docker/cp-test_multinode-717678-m03_multinode-717678-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (11.60s)
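
The copy matrix above is built from pairs of cp and ssh-cat commands; a minimal sketch of one round trip, taken from the log:

	out/minikube-linux-arm64 -p multinode-717678 cp testdata/cp-test.txt multinode-717678:/home/docker/cp-test.txt
	out/minikube-linux-arm64 -p multinode-717678 ssh -n multinode-717678 "sudo cat /home/docker/cp-test.txt"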

TestMultiNode/serial/StopNode (2.45s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717678 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-arm64 -p multinode-717678 node stop m03: (1.260464771s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717678 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-717678 status: exit status 7 (591.363117ms)
-- stdout --
	multinode-717678
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-717678-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-717678-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717678 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-717678 status --alsologtostderr: exit status 7 (591.727082ms)
-- stdout --
	multinode-717678
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-717678-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-717678-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1009 23:21:35.191369 1618721 out.go:296] Setting OutFile to fd 1 ...
	I1009 23:21:35.191537 1618721 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1009 23:21:35.191546 1618721 out.go:309] Setting ErrFile to fd 2...
	I1009 23:21:35.191553 1618721 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1009 23:21:35.191846 1618721 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17375-1537865/.minikube/bin
	I1009 23:21:35.192032 1618721 out.go:303] Setting JSON to false
	I1009 23:21:35.192114 1618721 mustload.go:65] Loading cluster: multinode-717678
	I1009 23:21:35.192198 1618721 notify.go:220] Checking for updates...
	I1009 23:21:35.192542 1618721 config.go:182] Loaded profile config "multinode-717678": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1009 23:21:35.192553 1618721 status.go:255] checking status of multinode-717678 ...
	I1009 23:21:35.193057 1618721 cli_runner.go:164] Run: docker container inspect multinode-717678 --format={{.State.Status}}
	I1009 23:21:35.219143 1618721 status.go:330] multinode-717678 host status = "Running" (err=<nil>)
	I1009 23:21:35.219170 1618721 host.go:66] Checking if "multinode-717678" exists ...
	I1009 23:21:35.219552 1618721 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-717678
	I1009 23:21:35.245052 1618721 host.go:66] Checking if "multinode-717678" exists ...
	I1009 23:21:35.245394 1618721 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 23:21:35.245445 1618721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-717678
	I1009 23:21:35.268999 1618721 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34434 SSHKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/machines/multinode-717678/id_rsa Username:docker}
	I1009 23:21:35.362188 1618721 ssh_runner.go:195] Run: systemctl --version
	I1009 23:21:35.368003 1618721 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 23:21:35.381603 1618721 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 23:21:35.451811 1618721 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:55 SystemTime:2023-10-09 23:21:35.442005015 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1009 23:21:35.452444 1618721 kubeconfig.go:92] found "multinode-717678" server: "https://192.168.58.2:8443"
	I1009 23:21:35.452467 1618721 api_server.go:166] Checking apiserver status ...
	I1009 23:21:35.452515 1618721 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 23:21:35.466507 1618721 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1243/cgroup
	I1009 23:21:35.478855 1618721 api_server.go:182] apiserver freezer: "11:freezer:/docker/4263e5d8fe6b4225f635cb6100a7248d26a60b28f7521d97b02e4d683d7c37c9/crio/crio-add32a9afc223ef304a25ca6002379e287af1483a1ce3fd46f27f867a41a7735"
	I1009 23:21:35.478932 1618721 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/4263e5d8fe6b4225f635cb6100a7248d26a60b28f7521d97b02e4d683d7c37c9/crio/crio-add32a9afc223ef304a25ca6002379e287af1483a1ce3fd46f27f867a41a7735/freezer.state
	I1009 23:21:35.490082 1618721 api_server.go:204] freezer state: "THAWED"
	I1009 23:21:35.490118 1618721 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1009 23:21:35.499029 1618721 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1009 23:21:35.499058 1618721 status.go:421] multinode-717678 apiserver status = Running (err=<nil>)
	I1009 23:21:35.499069 1618721 status.go:257] multinode-717678 status: &{Name:multinode-717678 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1009 23:21:35.499087 1618721 status.go:255] checking status of multinode-717678-m02 ...
	I1009 23:21:35.499669 1618721 cli_runner.go:164] Run: docker container inspect multinode-717678-m02 --format={{.State.Status}}
	I1009 23:21:35.518584 1618721 status.go:330] multinode-717678-m02 host status = "Running" (err=<nil>)
	I1009 23:21:35.518609 1618721 host.go:66] Checking if "multinode-717678-m02" exists ...
	I1009 23:21:35.518911 1618721 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-717678-m02
	I1009 23:21:35.538350 1618721 host.go:66] Checking if "multinode-717678-m02" exists ...
	I1009 23:21:35.538717 1618721 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 23:21:35.538774 1618721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-717678-m02
	I1009 23:21:35.558294 1618721 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34439 SSHKeyPath:/home/jenkins/minikube-integration/17375-1537865/.minikube/machines/multinode-717678-m02/id_rsa Username:docker}
	I1009 23:21:35.653854 1618721 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 23:21:35.667907 1618721 status.go:257] multinode-717678-m02 status: &{Name:multinode-717678-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1009 23:21:35.667949 1618721 status.go:255] checking status of multinode-717678-m03 ...
	I1009 23:21:35.668388 1618721 cli_runner.go:164] Run: docker container inspect multinode-717678-m03 --format={{.State.Status}}
	I1009 23:21:35.688002 1618721 status.go:330] multinode-717678-m03 host status = "Stopped" (err=<nil>)
	I1009 23:21:35.688029 1618721 status.go:343] host is not running, skipping remaining checks
	I1009 23:21:35.688036 1618721 status.go:257] multinode-717678-m03 status: &{Name:multinode-717678-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.45s)
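
As the exit status 7 above shows, status deliberately returns non-zero while any node is down; a minimal sketch of checking that from a script:

	out/minikube-linux-arm64 -p multinode-717678 status
	echo "status exit code: $?"   # 7 while m03 is stopped in this run; 0 once every node is running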

TestMultiNode/serial/StartAfterStop (12.79s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:244: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717678 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-linux-arm64 -p multinode-717678 node start m03 --alsologtostderr: (11.916852972s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717678 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (12.79s)

TestMultiNode/serial/RestartKeepsNodes (125.03s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-717678
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-717678
multinode_test.go:290: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-717678: (25.154692187s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-717678 --wait=true -v=8 --alsologtostderr
E1009 23:22:53.339486 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/addons-749116/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-arm64 start -p multinode-717678 --wait=true -v=8 --alsologtostderr: (1m39.661389138s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-717678
--- PASS: TestMultiNode/serial/RestartKeepsNodes (125.03s)

TestMultiNode/serial/DeleteNode (5.17s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717678 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-arm64 -p multinode-717678 node delete m03: (4.389705308s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717678 status --alsologtostderr
multinode_test.go:414: (dbg) Run:  docker volume ls
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.17s)
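
The `kubectl get nodes -o go-template` call above passes a plain Go text/template. A self-contained sketch running the same template against a hypothetical node list (shaped like `kubectl get nodes -o json` output) shows what the test asserts on: the `.status` of every condition whose `.type` is "Ready", one per node:

```go
package main

import (
	"os"
	"text/template"
)

func main() {
	// Hypothetical data shaped like `kubectl get nodes -o json` output.
	nodes := map[string]any{
		"items": []any{
			map[string]any{"status": map[string]any{"conditions": []any{
				map[string]any{"type": "MemoryPressure", "status": "False"},
				map[string]any{"type": "Ready", "status": "True"},
			}}},
		},
	}
	// The exact template from the test: print the status of each node's
	// Ready condition, newline-separated.
	tmpl := template.Must(template.New("ready").Parse(
		`{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`))
	if err := tmpl.Execute(os.Stdout, nodes); err != nil { // prints " True"
		panic(err)
	}
}
```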

TestMultiNode/serial/StopMultiNode (24.48s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717678 stop
E1009 23:24:12.469251 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/ingress-addon-legacy-789037/client.crt: no such file or directory
multinode_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p multinode-717678 stop: (24.247623174s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717678 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-717678 status: exit status 7 (118.969762ms)

-- stdout --
	multinode-717678
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-717678-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717678 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-717678 status --alsologtostderr: exit status 7 (115.419852ms)

-- stdout --
	multinode-717678
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-717678-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1009 23:24:23.136382 1626797 out.go:296] Setting OutFile to fd 1 ...
	I1009 23:24:23.136597 1626797 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1009 23:24:23.136623 1626797 out.go:309] Setting ErrFile to fd 2...
	I1009 23:24:23.136645 1626797 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1009 23:24:23.136932 1626797 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17375-1537865/.minikube/bin
	I1009 23:24:23.137149 1626797 out.go:303] Setting JSON to false
	I1009 23:24:23.137293 1626797 mustload.go:65] Loading cluster: multinode-717678
	I1009 23:24:23.137394 1626797 notify.go:220] Checking for updates...
	I1009 23:24:23.137786 1626797 config.go:182] Loaded profile config "multinode-717678": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1009 23:24:23.137807 1626797 status.go:255] checking status of multinode-717678 ...
	I1009 23:24:23.138369 1626797 cli_runner.go:164] Run: docker container inspect multinode-717678 --format={{.State.Status}}
	I1009 23:24:23.156642 1626797 status.go:330] multinode-717678 host status = "Stopped" (err=<nil>)
	I1009 23:24:23.156691 1626797 status.go:343] host is not running, skipping remaining checks
	I1009 23:24:23.156699 1626797 status.go:257] multinode-717678 status: &{Name:multinode-717678 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1009 23:24:23.156728 1626797 status.go:255] checking status of multinode-717678-m02 ...
	I1009 23:24:23.157049 1626797 cli_runner.go:164] Run: docker container inspect multinode-717678-m02 --format={{.State.Status}}
	I1009 23:24:23.175017 1626797 status.go:330] multinode-717678-m02 host status = "Stopped" (err=<nil>)
	I1009 23:24:23.175040 1626797 status.go:343] host is not running, skipping remaining checks
	I1009 23:24:23.175048 1626797 status.go:257] multinode-717678-m02 status: &{Name:multinode-717678-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.48s)

TestMultiNode/serial/RestartMultiNode (78.7s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:344: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:354: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-717678 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:354: (dbg) Done: out/minikube-linux-arm64 start -p multinode-717678 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m17.846948834s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-arm64 -p multinode-717678 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (78.70s)

TestMultiNode/serial/ValidateNameConflict (37.41s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-717678
multinode_test.go:452: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-717678-m02 --driver=docker  --container-runtime=crio
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-717678-m02 --driver=docker  --container-runtime=crio: exit status 14 (141.67673ms)

-- stdout --
	* [multinode-717678-m02] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17375
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17375-1537865/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17375-1537865/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-717678-m02' is duplicated with machine name 'multinode-717678-m02' in profile 'multinode-717678'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-717678-m03 --driver=docker  --container-runtime=crio
E1009 23:26:11.758934 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/functional-634060/client.crt: no such file or directory
multinode_test.go:460: (dbg) Done: out/minikube-linux-arm64 start -p multinode-717678-m03 --driver=docker  --container-runtime=crio: (34.810426145s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-717678
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-717678: exit status 80 (372.397419ms)

-- stdout --
	* Adding node m03 to cluster multinode-717678
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-717678-m03 already exists in multinode-717678-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-717678-m03
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-717678-m03: (1.999264156s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (37.41s)
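
Both failures above enforce one rule: a new profile name must not collide with a machine name that already belongs to another profile (exit 14, MK_USAGE), and `node add` must not reuse a name an existing profile already owns (exit 80). A purely illustrative sketch of that uniqueness check, with the helper name and data shape invented for the example:

```go
package main

import "fmt"

// nameIsFree reports whether a proposed profile name collides with any
// machine name in an existing profile. Hypothetical helper; minikube's
// real validation differs in detail.
func nameIsFree(name string, machinesByProfile map[string][]string) bool {
	for _, machines := range machinesByProfile {
		for _, machine := range machines {
			if machine == name {
				return false
			}
		}
	}
	return true
}

func main() {
	existing := map[string][]string{
		"multinode-717678": {"multinode-717678", "multinode-717678-m02"},
	}
	fmt.Println(nameIsFree("multinode-717678-m02", existing)) // false: rejected with MK_USAGE
	fmt.Println(nameIsFree("multinode-717678-m04", existing)) // true: start would proceed
}
```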

TestPreload (181.52s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-604354 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E1009 23:27:34.802912 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/functional-634060/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-604354 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m26.00262138s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-604354 image pull gcr.io/k8s-minikube/busybox
E1009 23:27:53.340336 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/addons-749116/client.crt: no such file or directory
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-604354 image pull gcr.io/k8s-minikube/busybox: (2.636904635s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-604354
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-604354: (6.119715171s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-604354 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1009 23:29:12.468915 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/ingress-addon-legacy-789037/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-604354 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (1m24.082171649s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-604354 image list
helpers_test.go:175: Cleaning up "test-preload-604354" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-604354
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-604354: (2.395657315s)
--- PASS: TestPreload (181.52s)

TestScheduledStopUnix (110.92s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-406285 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-406285 --memory=2048 --driver=docker  --container-runtime=crio: (34.170348s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-406285 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-406285 -n scheduled-stop-406285
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-406285 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-406285 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-406285 -n scheduled-stop-406285
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-406285
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-406285 --schedule 15s
E1009 23:30:35.515435 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/ingress-addon-legacy-789037/client.crt: no such file or directory
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1009 23:31:11.758865 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/functional-634060/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-406285
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-406285: exit status 7 (92.90385ms)

-- stdout --
	scheduled-stop-406285
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-406285 -n scheduled-stop-406285
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-406285 -n scheduled-stop-406285: exit status 7 (91.788918ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-406285" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-406285
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-406285: (4.870480063s)
--- PASS: TestScheduledStopUnix (110.92s)

TestInsufficientStorage (11.52s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-049368 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-049368 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (8.925683903s)

-- stdout --
	{"specversion":"1.0","id":"c54902c5-a386-4a5d-9a00-ab149c8e9fc7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-049368] minikube v1.31.2 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ceaa2506-cd10-460a-bfc4-146a5215b340","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17375"}}
	{"specversion":"1.0","id":"8069eaeb-b3ec-43aa-af60-09788c1006b6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"57063f92-f178-4513-812e-dc2f1857d405","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17375-1537865/kubeconfig"}}
	{"specversion":"1.0","id":"8bb62fb1-7952-4987-b253-a5675ae993ac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17375-1537865/.minikube"}}
	{"specversion":"1.0","id":"a602ebf0-89e9-4016-b2f2-703a8fa73636","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"da804ea3-38ab-4c54-91dd-be2d11e71ac2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"3d110ad7-71d2-4d32-a7f3-5bda6d5f7fca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"c3ececd8-8455-431b-b5ab-f20e9423a014","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"c61a4714-5f82-43ac-b0d5-85f00072d27d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"aaa71d60-c35d-4f90-9985-91526e14bafe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"b2f7d70f-e9c9-4b0a-a6c5-e5c7ed3bb026","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-049368 in cluster insufficient-storage-049368","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"4165056c-e7b8-4784-af7e-55536bcd9c84","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"665f41ee-9945-4bba-9fc5-c24aeb0a3315","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"2b622e42-5260-46b9-aa07-3fc69727490d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-049368 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-049368 --output=json --layout=cluster: exit status 7 (325.929054ms)

-- stdout --
	{"Name":"insufficient-storage-049368","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-049368","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1009 23:31:27.499895 1643834 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-049368" does not appear in /home/jenkins/minikube-integration/17375-1537865/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-049368 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-049368 --output=json --layout=cluster: exit status 7 (341.071125ms)

-- stdout --
	{"Name":"insufficient-storage-049368","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-049368","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1009 23:31:27.842094 1643888 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-049368" does not appear in /home/jenkins/minikube-integration/17375-1537865/kubeconfig
	E1009 23:31:27.855042 1643888 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/insufficient-storage-049368/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-049368" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-049368
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-049368: (1.929000776s)
--- PASS: TestInsufficientStorage (11.52s)
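
With `--output=json`, minikube writes one CloudEvents-style JSON object per line, as in the stdout block above; the final `io.k8s.sigs.minikube.error` event carries the RSRC_DOCKER_STORAGE advice and exit code 26. A sketch of a line-by-line decoder for that stream, with the struct shape inferred from the log rather than minikube's canonical types:

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event is the shape of one minikube --output=json line as seen above.
// Inferred from the log; not minikube's canonical type definitions.
type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin) // e.g. minikube start ... --output=json | thisprog
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // ignore any non-JSON lines
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error %s: exitcode=%s\n", ev.Data["name"], ev.Data["exitcode"])
		}
	}
}
```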

TestKubernetesUpgrade (381.26s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-637449 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-637449 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m8.653800891s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-637449
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-637449: (2.507644185s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-637449 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-637449 status --format={{.Host}}: exit status 7 (98.052503ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-637449 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1009 23:34:12.468654 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/ingress-addon-legacy-789037/client.crt: no such file or directory
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-637449 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m40.797188175s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-637449 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-637449 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-637449 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio: exit status 106 (185.284031ms)

-- stdout --
	* [kubernetes-upgrade-637449] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17375
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17375-1537865/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17375-1537865/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.28.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-637449
	    minikube start -p kubernetes-upgrade-637449 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6374492 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.28.2, by running:
	    
	    minikube start -p kubernetes-upgrade-637449 --kubernetes-version=v1.28.2
	    

** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-637449 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1009 23:39:12.469015 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/ingress-addon-legacy-789037/client.crt: no such file or directory
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-637449 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (26.581700793s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-637449" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-637449
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-637449: (2.322409908s)
--- PASS: TestKubernetesUpgrade (381.26s)
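
The exit-status-106 block above shows the guard this test exercises: asking an existing v1.28.2 cluster to start at v1.16.0 fails with K8S_DOWNGRADE_UNSUPPORTED, while re-running at the same version succeeds. A hedged sketch of such a version guard using golang.org/x/mod/semver; illustrative only, not minikube's implementation:

```go
package main

import (
	"fmt"

	"golang.org/x/mod/semver"
)

// guardDowngrade refuses a requested Kubernetes version older than the
// cluster's current one, mirroring the behavior seen in the log.
func guardDowngrade(current, requested string) error {
	if semver.Compare(requested, current) < 0 {
		return fmt.Errorf("unable to safely downgrade existing Kubernetes %s cluster to %s",
			current, requested)
	}
	return nil
}

func main() {
	fmt.Println(guardDowngrade("v1.28.2", "v1.16.0")) // refused, as in the test
	fmt.Println(guardDowngrade("v1.28.2", "v1.28.2")) // <nil>: restart allowed
}
```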

TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-349860 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-349860 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (107.877436ms)

-- stdout --
	* [NoKubernetes-349860] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17375
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17375-1537865/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17375-1537865/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

TestNoKubernetes/serial/StartWithK8s (43.73s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-349860 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-349860 --driver=docker  --container-runtime=crio: (43.125901514s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-349860 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (43.73s)

TestNoKubernetes/serial/StartWithStopK8s (18.18s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-349860 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-349860 --no-kubernetes --driver=docker  --container-runtime=crio: (15.54049862s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-349860 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-349860 status -o json: exit status 2 (439.753208ms)

-- stdout --
	{"Name":"NoKubernetes-349860","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-349860
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-349860: (2.191438781s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (18.18s)
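
The `status -o json` line above serializes the same per-node status seen earlier as a Go struct literal. A short sketch decoding it, with field names read off the output rather than minikube's source:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// nodeStatus is inferred from the `status -o json` output above.
type nodeStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	raw := `{"Name":"NoKubernetes-349860","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`
	var st nodeStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	// Host running with kubelet stopped is exactly the exit-status-2
	// case the test expects after starting with --no-kubernetes.
	fmt.Printf("%+v\n", st)
}
```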

TestNoKubernetes/serial/Start (9.81s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-349860 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-349860 --no-kubernetes --driver=docker  --container-runtime=crio: (9.810198356s)
--- PASS: TestNoKubernetes/serial/Start (9.81s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.51s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-349860 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-349860 "sudo systemctl is-active --quiet service kubelet": exit status 1 (504.689225ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.51s)

TestNoKubernetes/serial/ProfileList (1.03s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.03s)

TestNoKubernetes/serial/Stop (1.3s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-349860
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-349860: (1.29510306s)
--- PASS: TestNoKubernetes/serial/Stop (1.30s)

TestNoKubernetes/serial/StartNoArgs (7.74s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-349860 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-349860 --driver=docker  --container-runtime=crio: (7.738153069s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.74s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.44s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-349860 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-349860 "sudo systemctl is-active --quiet service kubelet": exit status 1 (437.985539ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.44s)

TestStoppedBinaryUpgrade/Setup (1.28s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.28s)

TestPause/serial/Start (52.02s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-078272 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E1009 23:41:11.758466 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/functional-634060/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-078272 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (52.013209991s)
--- PASS: TestPause/serial/Start (52.02s)

TestPause/serial/SecondStartNoReconfiguration (42.61s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-078272 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-078272 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (42.584015901s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (42.61s)

TestPause/serial/Pause (1.1s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-078272 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-078272 --alsologtostderr -v=5: (1.10256409s)
--- PASS: TestPause/serial/Pause (1.10s)

TestPause/serial/VerifyStatus (0.45s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-078272 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-078272 --output=json --layout=cluster: exit status 2 (453.863336ms)

-- stdout --
	{"Name":"pause-078272","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-078272","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.45s)
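
The `--layout=cluster` payload above nests per-component status codes (418 Paused, 405 Stopped, 200 OK) under each node. A rough Go mirror of that JSON for programmatic checks, with field names read off the log rather than taken from minikube's source:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// component, node, and clusterStatus are convenience mirrors of the
// --layout=cluster output above, not minikube's own type definitions.
type component struct {
	Name       string
	StatusCode int
	StatusName string
}

type node struct {
	Name       string
	StatusCode int
	StatusName string
	Components map[string]component
}

type clusterStatus struct {
	Name          string
	StatusCode    int
	StatusName    string
	StatusDetail  string
	Step          string
	StepDetail    string
	BinaryVersion string
	Components    map[string]component
	Nodes         []node
}

func main() {
	raw := `{"Name":"pause-078272","StatusCode":418,"StatusName":"Paused","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-078272","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}`
	var cs clusterStatus
	if err := json.Unmarshal([]byte(raw), &cs); err != nil {
		panic(err)
	}
	// 418 ("Paused") at cluster level, kubelet 405 ("Stopped") per node,
	// matching the exit-status-2 assertion above.
	fmt.Println(cs.StatusName, cs.Nodes[0].Components["kubelet"].StatusName)
}
```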

TestPause/serial/Unpause (1.02s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-078272 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-linux-arm64 unpause -p pause-078272 --alsologtostderr -v=5: (1.015790959s)
--- PASS: TestPause/serial/Unpause (1.02s)

TestPause/serial/PauseAgain (1.32s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-078272 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-078272 --alsologtostderr -v=5: (1.317264692s)
--- PASS: TestPause/serial/PauseAgain (1.32s)

TestPause/serial/DeletePaused (3.13s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-078272 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-078272 --alsologtostderr -v=5: (3.133432634s)
--- PASS: TestPause/serial/DeletePaused (3.13s)

TestNetworkPlugins/group/false (5.78s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-722136 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-722136 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (394.144341ms)

-- stdout --
	* [false-722136] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17375
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17375-1537865/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17375-1537865/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1009 23:42:14.086167 1680973 out.go:296] Setting OutFile to fd 1 ...
	I1009 23:42:14.086418 1680973 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1009 23:42:14.086444 1680973 out.go:309] Setting ErrFile to fd 2...
	I1009 23:42:14.086464 1680973 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1009 23:42:14.086758 1680973 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17375-1537865/.minikube/bin
	I1009 23:42:14.087268 1680973 out.go:303] Setting JSON to false
	I1009 23:42:14.088323 1680973 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":26677,"bootTime":1696868257,"procs":234,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1047-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1009 23:42:14.090372 1680973 start.go:138] virtualization:  
	I1009 23:42:14.096205 1680973 out.go:177] * [false-722136] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I1009 23:42:14.098452 1680973 out.go:177]   - MINIKUBE_LOCATION=17375
	I1009 23:42:14.100236 1680973 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 23:42:14.098555 1680973 notify.go:220] Checking for updates...
	I1009 23:42:14.105120 1680973 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17375-1537865/kubeconfig
	I1009 23:42:14.107830 1680973 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17375-1537865/.minikube
	I1009 23:42:14.110366 1680973 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1009 23:42:14.112638 1680973 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 23:42:14.115534 1680973 config.go:182] Loaded profile config "pause-078272": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I1009 23:42:14.115674 1680973 driver.go:378] Setting default libvirt URI to qemu:///system
	I1009 23:42:14.169107 1680973 docker.go:122] docker version: linux-24.0.6:Docker Engine - Community
	I1009 23:42:14.169209 1680973 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 23:42:14.307407 1680973 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:36 OomKillDisable:true NGoroutines:45 SystemTime:2023-10-09 23:42:14.296405485 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1047-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1009 23:42:14.307513 1680973 docker.go:295] overlay module found
	I1009 23:42:14.309952 1680973 out.go:177] * Using the docker driver based on user configuration
	I1009 23:42:14.312375 1680973 start.go:298] selected driver: docker
	I1009 23:42:14.312392 1680973 start.go:902] validating driver "docker" against <nil>
	I1009 23:42:14.312405 1680973 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 23:42:14.315058 1680973 out.go:177] 
	W1009 23:42:14.317461 1680973 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1009 23:42:14.319326 1680973 out.go:177] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-722136 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-722136

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-722136

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-722136

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-722136

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-722136

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-722136

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-722136

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-722136

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-722136

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-722136

>>> host: /etc/nsswitch.conf:
* Profile "false-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-722136"

>>> host: /etc/hosts:
* Profile "false-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-722136"

>>> host: /etc/resolv.conf:
* Profile "false-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-722136"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-722136

>>> host: crictl pods:
* Profile "false-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-722136"

>>> host: crictl containers:
* Profile "false-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-722136"

>>> k8s: describe netcat deployment:
error: context "false-722136" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-722136" does not exist

>>> k8s: netcat logs:
error: context "false-722136" does not exist

>>> k8s: describe coredns deployment:
error: context "false-722136" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-722136" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-722136" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-722136" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-722136" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-722136"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-722136"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-722136"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-722136"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-722136"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-722136" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-722136" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-722136" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-722136"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-722136"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-722136"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-722136"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-722136"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-722136

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-722136"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-722136"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-722136"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-722136"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-722136"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-722136"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-722136"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-722136"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-722136"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-722136"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-722136"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-722136"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-722136"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-722136"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-722136"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-722136"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-722136"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-722136"

                                                
                                                
----------------------- debugLogs end: false-722136 [took: 5.186404263s] --------------------------------
helpers_test.go:175: Cleaning up "false-722136" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-722136
--- PASS: TestNetworkPlugins/group/false (5.78s)
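
Every debugLogs probe above failed with "context was not found" / "Profile ... not found" for the same reason: no false-722136 profile or kubectl context existed when the diagnostics ran, so empty output here is expected rather than a defect. A standalone check of that precondition (a sketch reusing the profile and context names from the log):

    minikube profile list                      # false-722136 should not be listed
    kubectl config get-contexts false-722136   # exits non-zero when the context is absent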

TestPause/serial/VerifyDeletedResources (0.2s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-078272
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-078272: exit status 1 (32.139975ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-078272: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.20s)
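
Note that the non-zero exit from `docker volume inspect` is the assertion itself: the test passes precisely because the volume is gone after deletion. An equivalent manual check (a sketch; the volume name pause-078272 is taken from the log):

    docker volume inspect pause-078272 >/dev/null 2>&1 && echo "volume still present" || echo "volume deleted"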

TestStartStop/group/old-k8s-version/serial/FirstStart (115.79s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-775584 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
E1009 23:44:12.468338 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/ingress-addon-legacy-789037/client.crt: no such file or directory
E1009 23:44:14.804083 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/functional-634060/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-775584 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (1m55.790962665s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (115.79s)
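
The E1009 cert_rotation lines are background noise from the client certificate watcher: they reference client.crt files of profiles (ingress-addon-legacy-789037, functional-634060) that no longer exist on disk, and they did not affect this test. Listing the surviving profile directories confirms it (a sketch; the path is copied verbatim from the log):

    ls /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/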

TestStartStop/group/old-k8s-version/serial/DeployApp (11.66s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-775584 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [508c73dc-a863-40dd-b574-49768f8efbcf] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [508c73dc-a863-40dd-b574-49768f8efbcf] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.030452528s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-775584 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.66s)
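
testdata/busybox.yaml itself is not reproduced in this report. A minimal pod spec consistent with the wait selector (integration-test=busybox) and the busybox image seen in the image audit below would look roughly like this (an illustrative sketch, not the harness's actual file):

    apiVersion: v1
    kind: Pod
    metadata:
      name: busybox
      labels:
        integration-test: busybox
    spec:
      containers:
      - name: busybox
        image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
        command: ["sleep", "3600"]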

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-775584 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-775584 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.019119044s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-775584 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.19s)

TestStartStop/group/old-k8s-version/serial/Stop (12.16s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-775584 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-775584 --alsologtostderr -v=3: (12.163708172s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.16s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-775584 -n old-k8s-version-775584
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-775584 -n old-k8s-version-775584: exit status 7 (97.582331ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-775584 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.24s)
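
The "exit status 7 (may be ok)" annotation is deliberate: `minikube status` exits with a distinct non-zero code when the cluster is stopped, and the test only needs the host to report Stopped before enabling the dashboard addon offline. Reproducing the check by hand (a sketch):

    out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-775584; echo "exit=$?"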

TestStartStop/group/old-k8s-version/serial/SecondStart (419.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-775584 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-775584 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (6m58.804997843s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-775584 -n old-k8s-version-775584
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (419.22s)

TestStartStop/group/no-preload/serial/FirstStart (65.23s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-962523 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2
E1009 23:47:15.516346 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/ingress-addon-legacy-789037/client.crt: no such file or directory
E1009 23:47:53.339795 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/addons-749116/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-962523 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2: (1m5.225886837s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (65.23s)

TestStartStop/group/no-preload/serial/DeployApp (10.53s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-962523 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5955da57-faef-4570-9987-e9131a0e2710] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5955da57-faef-4570-9987-e9131a0e2710] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.036356324s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-962523 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.53s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.24s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-962523 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-962523 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.111555089s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-962523 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.24s)

TestStartStop/group/no-preload/serial/Stop (12.11s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-962523 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-962523 --alsologtostderr -v=3: (12.105608004s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.11s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-962523 -n no-preload-962523
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-962523 -n no-preload-962523: exit status 7 (99.198494ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-962523 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/no-preload/serial/SecondStart (351.44s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-962523 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2
E1009 23:49:12.469234 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/ingress-addon-legacy-789037/client.crt: no such file or directory
E1009 23:51:11.758236 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/functional-634060/client.crt: no such file or directory
E1009 23:52:36.385917 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/addons-749116/client.crt: no such file or directory
E1009 23:52:53.339512 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/addons-749116/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-962523 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2: (5m50.851444125s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-962523 -n no-preload-962523
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (351.44s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-6nzch" [94112c27-eabf-450b-b33d-cb8e5828f546] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.024534789s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-6nzch" [94112c27-eabf-450b-b33d-cb8e5828f546] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009140514s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-775584 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.38s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p old-k8s-version-775584 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20220726-ed811e41
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.38s)
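
The image audit shells into the node and dumps the CRI image store as JSON; images outside the minikube namespace (kindnetd, busybox) are only reported, not treated as failures. The same listing can be taken manually (a sketch):

    out/minikube-linux-arm64 ssh -p old-k8s-version-775584 "sudo crictl images -o json"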

TestStartStop/group/old-k8s-version/serial/Pause (3.6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-775584 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-775584 -n old-k8s-version-775584
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-775584 -n old-k8s-version-775584: exit status 2 (384.831045ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-775584 -n old-k8s-version-775584
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-775584 -n old-k8s-version-775584: exit status 2 (369.521692ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-775584 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-775584 -n old-k8s-version-775584
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-775584 -n old-k8s-version-775584
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.60s)
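
While paused, `minikube status` reports APIServer=Paused and Kubelet=Stopped and exits 2, which the test tolerates before unpausing; that is why the two "Non-zero exit" entries above still end in a PASS. The same cycle by hand (a sketch):

    out/minikube-linux-arm64 pause -p old-k8s-version-775584
    out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-775584   # Paused, exit 2
    out/minikube-linux-arm64 unpause -p old-k8s-version-775584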

TestStartStop/group/embed-certs/serial/FirstStart (81.18s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-145082 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2
E1009 23:54:12.469126 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/ingress-addon-legacy-789037/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-145082 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2: (1m21.177041746s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (81.18s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (11.04s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-kd4r7" [418c707e-57dd-4d90-85a6-ab74edbd8471] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-kd4r7" [418c707e-57dd-4d90-85a6-ab74edbd8471] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.036402264s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (11.04s)

TestStartStop/group/embed-certs/serial/DeployApp (8.88s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-145082 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7eaf903a-7ae1-4f1a-8115-47b846924d8a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [7eaf903a-7ae1-4f1a-8115-47b846924d8a] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.042746677s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-145082 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.88s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.15s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-kd4r7" [418c707e-57dd-4d90-85a6-ab74edbd8471] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.013528935s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-962523 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.15s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.45s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p no-preload-962523 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.45s)

TestStartStop/group/no-preload/serial/Pause (4.19s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-962523 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-962523 -n no-preload-962523
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-962523 -n no-preload-962523: exit status 2 (363.118583ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-962523 -n no-preload-962523
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-962523 -n no-preload-962523: exit status 2 (378.401877ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-962523 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-962523 -n no-preload-962523
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-962523 -n no-preload-962523
--- PASS: TestStartStop/group/no-preload/serial/Pause (4.19s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.06s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-145082 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-145082 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.910512692s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-145082 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.06s)

TestStartStop/group/embed-certs/serial/Stop (12.25s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-145082 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-145082 --alsologtostderr -v=3: (12.25441242s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.25s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (77.51s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-477084 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-477084 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2: (1m17.507036687s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (77.51s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.46s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-145082 -n embed-certs-145082
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-145082 -n embed-certs-145082: exit status 7 (88.009152ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-145082 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.46s)

TestStartStop/group/embed-certs/serial/SecondStart (355.88s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-145082 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2
E1009 23:55:34.295304 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/old-k8s-version-775584/client.crt: no such file or directory
E1009 23:55:34.300944 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/old-k8s-version-775584/client.crt: no such file or directory
E1009 23:55:34.311110 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/old-k8s-version-775584/client.crt: no such file or directory
E1009 23:55:34.331371 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/old-k8s-version-775584/client.crt: no such file or directory
E1009 23:55:34.371634 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/old-k8s-version-775584/client.crt: no such file or directory
E1009 23:55:34.452043 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/old-k8s-version-775584/client.crt: no such file or directory
E1009 23:55:34.616861 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/old-k8s-version-775584/client.crt: no such file or directory
E1009 23:55:34.937314 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/old-k8s-version-775584/client.crt: no such file or directory
E1009 23:55:35.578378 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/old-k8s-version-775584/client.crt: no such file or directory
E1009 23:55:36.858698 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/old-k8s-version-775584/client.crt: no such file or directory
E1009 23:55:39.418937 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/old-k8s-version-775584/client.crt: no such file or directory
E1009 23:55:44.539665 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/old-k8s-version-775584/client.crt: no such file or directory
E1009 23:55:54.780698 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/old-k8s-version-775584/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-145082 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2: (5m55.146134862s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-145082 -n embed-certs-145082
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (355.88s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.5s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-477084 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [54b06acd-4265-4e19-b24a-77c115aec072] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1009 23:56:11.758977 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/functional-634060/client.crt: no such file or directory
helpers_test.go:344: "busybox" [54b06acd-4265-4e19-b24a-77c115aec072] Running
E1009 23:56:15.260852 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/old-k8s-version-775584/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.03042883s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-477084 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.50s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-477084 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-477084 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.145443752s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-477084 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.26s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-477084 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-477084 --alsologtostderr -v=3: (12.132690768s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.13s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-477084 -n default-k8s-diff-port-477084
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-477084 -n default-k8s-diff-port-477084: exit status 7 (91.136941ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-477084 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (628.52s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-477084 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2
E1009 23:56:56.222550 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/old-k8s-version-775584/client.crt: no such file or directory
E1009 23:57:53.340307 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/addons-749116/client.crt: no such file or directory
E1009 23:58:10.669544 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/no-preload-962523/client.crt: no such file or directory
E1009 23:58:10.674937 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/no-preload-962523/client.crt: no such file or directory
E1009 23:58:10.685253 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/no-preload-962523/client.crt: no such file or directory
E1009 23:58:10.705575 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/no-preload-962523/client.crt: no such file or directory
E1009 23:58:10.745918 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/no-preload-962523/client.crt: no such file or directory
E1009 23:58:10.826298 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/no-preload-962523/client.crt: no such file or directory
E1009 23:58:10.986678 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/no-preload-962523/client.crt: no such file or directory
E1009 23:58:11.306890 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/no-preload-962523/client.crt: no such file or directory
E1009 23:58:11.947986 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/no-preload-962523/client.crt: no such file or directory
E1009 23:58:13.228607 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/no-preload-962523/client.crt: no such file or directory
E1009 23:58:15.789627 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/no-preload-962523/client.crt: no such file or directory
E1009 23:58:18.143271 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/old-k8s-version-775584/client.crt: no such file or directory
E1009 23:58:20.910826 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/no-preload-962523/client.crt: no such file or directory
E1009 23:58:31.151096 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/no-preload-962523/client.crt: no such file or directory
E1009 23:58:51.631582 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/no-preload-962523/client.crt: no such file or directory
E1009 23:59:12.469292 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/ingress-addon-legacy-789037/client.crt: no such file or directory
E1009 23:59:32.592304 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/no-preload-962523/client.crt: no such file or directory
E1010 00:00:34.296128 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/old-k8s-version-775584/client.crt: no such file or directory
E1010 00:00:54.512492 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/no-preload-962523/client.crt: no such file or directory
E1010 00:00:54.804954 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/functional-634060/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-477084 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2: (10m28.051361552s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-477084 -n default-k8s-diff-port-477084
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (628.52s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (14.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-6dnhf" [41a92152-d7e4-46ae-8439-ef512356dd27] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1010 00:01:01.983937 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/old-k8s-version-775584/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-6dnhf" [41a92152-d7e4-46ae-8439-ef512356dd27] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.028325986s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (14.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.14s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-6dnhf" [41a92152-d7e4-46ae-8439-ef512356dd27] Running
E1010 00:01:11.758909 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/functional-634060/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.013081369s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-145082 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.14s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.42s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p embed-certs-145082 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.42s)

TestStartStop/group/embed-certs/serial/Pause (3.65s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-145082 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-145082 -n embed-certs-145082
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-145082 -n embed-certs-145082: exit status 2 (388.519664ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-145082 -n embed-certs-145082
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-145082 -n embed-certs-145082: exit status 2 (385.661252ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-145082 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-145082 -n embed-certs-145082
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-145082 -n embed-certs-145082
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.65s)

TestStartStop/group/newest-cni/serial/FirstStart (51.91s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-840277 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-840277 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2: (51.906842535s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (51.91s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.45s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-840277 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-840277 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.446121076s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.45s)

TestStartStop/group/newest-cni/serial/Stop (1.44s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-840277 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-840277 --alsologtostderr -v=3: (1.437964079s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.44s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.41s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-840277 -n newest-cni-840277
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-840277 -n newest-cni-840277: exit status 7 (156.991825ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-840277 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.41s)

TestStartStop/group/newest-cni/serial/SecondStart (32.5s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-840277 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-840277 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2: (32.033996328s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-840277 -n newest-cni-840277
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (32.50s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.38s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p newest-cni-840277 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.38s)

TestStartStop/group/newest-cni/serial/Pause (3.34s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-840277 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-840277 -n newest-cni-840277
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-840277 -n newest-cni-840277: exit status 2 (368.837607ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-840277 -n newest-cni-840277
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-840277 -n newest-cni-840277: exit status 2 (398.678017ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-840277 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-840277 -n newest-cni-840277
E1010 00:02:53.339487 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/addons-749116/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-840277 -n newest-cni-840277
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.34s)

TestNetworkPlugins/group/auto/Start (52.33s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-722136 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E1010 00:03:10.669271 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/no-preload-962523/client.crt: no such file or directory
E1010 00:03:38.353036 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/no-preload-962523/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-722136 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (52.330108219s)
--- PASS: TestNetworkPlugins/group/auto/Start (52.33s)

TestNetworkPlugins/group/auto/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-722136 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.36s)

TestNetworkPlugins/group/auto/NetCatPod (11.36s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-722136 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-dkx4v" [fe28390b-579f-4101-8877-4e14b10f8ab6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-dkx4v" [fe28390b-579f-4101-8877-4e14b10f8ab6] Running
E1010 00:03:55.516833 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/ingress-addon-legacy-789037/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.011073386s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.36s)

TestNetworkPlugins/group/auto/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-722136 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.21s)

TestNetworkPlugins/group/auto/Localhost (0.25s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-722136 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.25s)

TestNetworkPlugins/group/auto/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-722136 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.20s)

TestNetworkPlugins/group/kindnet/Start (78.94s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-722136 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E1010 00:05:34.296349 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/old-k8s-version-775584/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-722136 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m18.935266073s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (78.94s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.04s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-r7dgq" [0817c64f-ca3b-44f1-8c08-fe8db9bd957d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.039745248s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.04s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-722136 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.36s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.35s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-722136 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-t2zbk" [2facfa2b-fbe4-4b9b-8d4b-093d5c73d1b7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-t2zbk" [2facfa2b-fbe4-4b9b-8d4b-093d5c73d1b7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.011541179s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.35s)

TestNetworkPlugins/group/kindnet/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-722136 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.23s)

TestNetworkPlugins/group/kindnet/Localhost (0.39s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-722136 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.39s)

TestNetworkPlugins/group/kindnet/HairPin (0.3s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-722136 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.30s)

TestNetworkPlugins/group/calico/Start (76.9s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-722136 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-722136 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m16.899620077s)
--- PASS: TestNetworkPlugins/group/calico/Start (76.90s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-7twst" [eeab3c7b-291c-4635-822a-51ac8ac1cb41] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.027886214s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-7twst" [eeab3c7b-291c-4635-822a-51ac8ac1cb41] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.034685374s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-477084 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.21s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.53s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p default-k8s-diff-port-477084 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.53s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (5.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-477084 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-477084 --alsologtostderr -v=1: (1.352009863s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-477084 -n default-k8s-diff-port-477084
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-477084 -n default-k8s-diff-port-477084: exit status 2 (598.4919ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-477084 -n default-k8s-diff-port-477084
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-477084 -n default-k8s-diff-port-477084: exit status 2 (550.633967ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-477084 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p default-k8s-diff-port-477084 --alsologtostderr -v=1: (1.123515016s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-477084 -n default-k8s-diff-port-477084
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-477084 -n default-k8s-diff-port-477084
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (5.18s)
E1010 00:11:18.446230 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/default-k8s-diff-port-477084/client.crt: no such file or directory
E1010 00:11:23.852179 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/kindnet-722136/client.crt: no such file or directory
E1010 00:11:28.686971 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/default-k8s-diff-port-477084/client.crt: no such file or directory
E1010 00:11:33.578394 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/auto-722136/client.crt: no such file or directory
E1010 00:11:49.167365 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/default-k8s-diff-port-477084/client.crt: no such file or directory
E1010 00:11:57.344855 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/old-k8s-version-775584/client.crt: no such file or directory
E1010 00:12:04.812963 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/kindnet-722136/client.crt: no such file or directory

TestNetworkPlugins/group/custom-flannel/Start (76.6s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-722136 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-722136 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m16.603558637s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (76.60s)

TestNetworkPlugins/group/calico/ControllerPod (5.05s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-62fq5" [0c697448-beba-44f7-8472-412131ba62be] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.050990157s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.05s)

TestNetworkPlugins/group/calico/KubeletFlags (0.48s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-722136 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.48s)

TestNetworkPlugins/group/calico/NetCatPod (14.52s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-722136 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-lsx5m" [b953ca51-f5eb-4e75-9737-16841b1331af] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1010 00:07:53.340022 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/addons-749116/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-lsx5m" [b953ca51-f5eb-4e75-9737-16841b1331af] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 14.018381108s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (14.52s)

TestNetworkPlugins/group/calico/DNS (0.3s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-722136 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.30s)

TestNetworkPlugins/group/calico/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-722136 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.22s)

TestNetworkPlugins/group/calico/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-722136 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.21s)

TestNetworkPlugins/group/enable-default-cni/Start (88.43s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-722136 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-722136 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m28.425297826s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (88.43s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.53s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-722136 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.53s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (12.45s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-722136 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-jmxsz" [32ca427f-7858-4369-bc09-a35b01109423] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-jmxsz" [32ca427f-7858-4369-bc09-a35b01109423] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.017172882s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.45s)

TestNetworkPlugins/group/custom-flannel/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-722136 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.26s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.25s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-722136 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.25s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.27s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-722136 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.27s)

TestNetworkPlugins/group/flannel/Start (71.94s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-722136 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E1010 00:09:30.697516 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/auto-722136/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-722136 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m11.941807815s)
--- PASS: TestNetworkPlugins/group/flannel/Start (71.94s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-722136 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.38s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.44s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-722136 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-hgc2v" [ddc502c7-6a7f-4191-8ffa-0f035354245d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-hgc2v" [ddc502c7-6a7f-4191-8ffa-0f035354245d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.011570297s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.44s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.28s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-722136 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.28s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.27s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-722136 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.27s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.26s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-722136 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.26s)

TestNetworkPlugins/group/flannel/ControllerPod (5.04s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-4pdj9" [deef27f1-d039-4da6-8bc6-61e8e6253fc5] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.037812735s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.04s)

TestNetworkPlugins/group/bridge/Start (95.27s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-722136 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-722136 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m35.270636723s)
--- PASS: TestNetworkPlugins/group/bridge/Start (95.27s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.43s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-722136 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.43s)

TestNetworkPlugins/group/flannel/NetCatPod (13.43s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-722136 replace --force -f testdata/netcat-deployment.yaml
E1010 00:10:34.295701 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/old-k8s-version-775584/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-8jlwn" [45d71731-8762-4b22-8cae-da4f5a8d8213] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-8jlwn" [45d71731-8762-4b22-8cae-da4f5a8d8213] Running
E1010 00:10:42.886531 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/kindnet-722136/client.crt: no such file or directory
E1010 00:10:42.891950 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/kindnet-722136/client.crt: no such file or directory
E1010 00:10:42.902272 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/kindnet-722136/client.crt: no such file or directory
E1010 00:10:42.922888 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/kindnet-722136/client.crt: no such file or directory
E1010 00:10:42.963151 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/kindnet-722136/client.crt: no such file or directory
E1010 00:10:43.043448 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/kindnet-722136/client.crt: no such file or directory
E1010 00:10:43.203813 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/kindnet-722136/client.crt: no such file or directory
E1010 00:10:43.524864 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/kindnet-722136/client.crt: no such file or directory
E1010 00:10:44.165446 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/kindnet-722136/client.crt: no such file or directory
E1010 00:10:45.446121 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/kindnet-722136/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 13.014143045s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (13.43s)

TestNetworkPlugins/group/flannel/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-722136 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.26s)

TestNetworkPlugins/group/flannel/Localhost (0.24s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-722136 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.24s)

TestNetworkPlugins/group/flannel/HairPin (0.27s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-722136 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E1010 00:10:48.010051 1543215 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/kindnet-722136/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.27s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-722136 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.33s)

TestNetworkPlugins/group/bridge/NetCatPod (10.34s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-722136 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-9ww97" [f0878124-2656-4670-b378-3d81ebe55c0a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-9ww97" [f0878124-2656-4670-b378-3d81ebe55c0a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.014253067s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.34s)

TestNetworkPlugins/group/bridge/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-722136 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.24s)

TestNetworkPlugins/group/bridge/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-722136 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.18s)

TestNetworkPlugins/group/bridge/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-722136 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.20s)

Test skip (29/308)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.2/cached-images (0.00s)

TestDownloadOnly/v1.28.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.2/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.2/binaries (0.00s)

TestDownloadOnly/v1.28.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.2/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.2/kubectl (0.00s)

TestDownloadOnlyKic (0.65s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-066198 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:234: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-066198" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-066198
--- SKIP: TestDownloadOnlyKic (0.65s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:443: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:497: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.18s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-548986" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-548986
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)

TestNetworkPlugins/group/kubenet (5.8s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-722136 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-722136

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-722136

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-722136

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-722136

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-722136

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-722136

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-722136

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-722136

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-722136

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-722136

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-722136"

>>> host: /etc/hosts:
* Profile "kubenet-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-722136"

>>> host: /etc/resolv.conf:
* Profile "kubenet-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-722136"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-722136

>>> host: crictl pods:
* Profile "kubenet-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-722136"

>>> host: crictl containers:
* Profile "kubenet-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-722136"

>>> k8s: describe netcat deployment:
error: context "kubenet-722136" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-722136" does not exist

>>> k8s: netcat logs:
error: context "kubenet-722136" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-722136" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-722136" does not exist

>>> k8s: coredns logs:
error: context "kubenet-722136" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-722136" does not exist

>>> k8s: api server logs:
error: context "kubenet-722136" does not exist

>>> host: /etc/cni:
* Profile "kubenet-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-722136"

>>> host: ip a s:
* Profile "kubenet-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-722136"

>>> host: ip r s:
* Profile "kubenet-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-722136"

>>> host: iptables-save:
* Profile "kubenet-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-722136"

>>> host: iptables table nat:
* Profile "kubenet-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-722136"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-722136" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-722136" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-722136" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-722136"

>>> host: kubelet daemon config:
* Profile "kubenet-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-722136"

>>> k8s: kubelet logs:
* Profile "kubenet-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-722136"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-722136"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-722136"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17375-1537865/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 09 Oct 2023 23:41:59 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.67.2:8443
  name: pause-078272
contexts:
- context:
    cluster: pause-078272
    extensions:
    - extension:
        last-update: Mon, 09 Oct 2023 23:41:59 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: context_info
    namespace: default
    user: pause-078272
  name: pause-078272
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-078272
  user:
    client-certificate: /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/pause-078272/client.crt
    client-key: /home/jenkins/minikube-integration/17375-1537865/.minikube/profiles/pause-078272/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-722136

>>> host: docker daemon status:
* Profile "kubenet-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-722136"

>>> host: docker daemon config:
* Profile "kubenet-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-722136"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-722136"

>>> host: docker system info:
* Profile "kubenet-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-722136"

>>> host: cri-docker daemon status:
* Profile "kubenet-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-722136"

>>> host: cri-docker daemon config:
* Profile "kubenet-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-722136"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-722136"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-722136"

>>> host: cri-dockerd version:
* Profile "kubenet-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-722136"

>>> host: containerd daemon status:
* Profile "kubenet-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-722136"

>>> host: containerd daemon config:
* Profile "kubenet-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-722136"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-722136"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-722136"

>>> host: containerd config dump:
* Profile "kubenet-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-722136"

>>> host: crio daemon status:
* Profile "kubenet-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-722136"

>>> host: crio daemon config:
* Profile "kubenet-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-722136"

>>> host: /etc/crio:
* Profile "kubenet-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-722136"

>>> host: crio config:
* Profile "kubenet-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-722136"

----------------------- debugLogs end: kubenet-722136 [took: 5.55871688s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-722136" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-722136
--- SKIP: TestNetworkPlugins/group/kubenet (5.80s)

TestNetworkPlugins/group/cilium (6.79s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-722136 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-722136

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-722136

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-722136

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-722136

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-722136

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-722136

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-722136

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-722136

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-722136

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-722136

>>> host: /etc/nsswitch.conf:
* Profile "cilium-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-722136"

>>> host: /etc/hosts:
* Profile "cilium-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-722136"

>>> host: /etc/resolv.conf:
* Profile "cilium-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-722136"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-722136

>>> host: crictl pods:
* Profile "cilium-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-722136"

>>> host: crictl containers:
* Profile "cilium-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-722136"

>>> k8s: describe netcat deployment:
error: context "cilium-722136" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-722136" does not exist

>>> k8s: netcat logs:
error: context "cilium-722136" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-722136" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-722136" does not exist

>>> k8s: coredns logs:
error: context "cilium-722136" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-722136" does not exist

>>> k8s: api server logs:
error: context "cilium-722136" does not exist

>>> host: /etc/cni:
* Profile "cilium-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-722136"

>>> host: ip a s:
* Profile "cilium-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-722136"

>>> host: ip r s:
* Profile "cilium-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-722136"

>>> host: iptables-save:
* Profile "cilium-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-722136"

>>> host: iptables table nat:
* Profile "cilium-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-722136"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-722136

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-722136

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-722136" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-722136" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-722136

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-722136

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-722136" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-722136" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-722136" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-722136" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-722136" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-722136"

>>> host: kubelet daemon config:
* Profile "cilium-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-722136"

>>> k8s: kubelet logs:
* Profile "cilium-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-722136"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-722136"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-722136"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-722136

>>> host: docker daemon status:
* Profile "cilium-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-722136"

>>> host: docker daemon config:
* Profile "cilium-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-722136"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-722136"

>>> host: docker system info:
* Profile "cilium-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-722136"

>>> host: cri-docker daemon status:
* Profile "cilium-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-722136"

>>> host: cri-docker daemon config:
* Profile "cilium-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-722136"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-722136"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-722136"

>>> host: cri-dockerd version:
* Profile "cilium-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-722136"

>>> host: containerd daemon status:
* Profile "cilium-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-722136"

>>> host: containerd daemon config:
* Profile "cilium-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-722136"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-722136"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-722136"

>>> host: containerd config dump:
* Profile "cilium-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-722136"

>>> host: crio daemon status:
* Profile "cilium-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-722136"

>>> host: crio daemon config:
* Profile "cilium-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-722136"

>>> host: /etc/crio:
* Profile "cilium-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-722136"

>>> host: crio config:
* Profile "cilium-722136" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-722136"

----------------------- debugLogs end: cilium-722136 [took: 6.557911605s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-722136" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-722136
--- SKIP: TestNetworkPlugins/group/cilium (6.79s)
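
Note that every query in the two debug dumps above fails with "context was not found" or "Profile ... not found": both tests skip before minikube start ever runs, so the kubenet-722136 and cilium-722136 clusters, and therefore their kubeconfig contexts, are never created. (The kubeconfig dumped during the kubenet run still shows a leftover pause-078272 entry from an earlier test, with current-context empty.) A quick sketch of how to confirm this from the same workstation, using standard minikube and kubectl commands:

  # neither kubenet-722136 nor cilium-722136 will be listed
  minikube profile list
  kubectl config get-contexts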