Test Report: Docker_Linux_crio_arm64 17263

Commit: 9c7b220a3b46302c250803ffb8def25eadaf0a12 | 2023-09-18 | 31068

Tests failed (7/304)

| Order | Failed test                                         | Duration (s) |
|-------|-----------------------------------------------------|--------------|
|    25 | TestAddons/parallel/Ingress                         |       168.02 |
|    31 | TestAddons/parallel/Headlamp                        |         3.63 |
|   154 | TestIngressAddonLegacy/serial/ValidateIngressAddons |       180.42 |
|   204 | TestMultiNode/serial/PingHostFrom2Pods              |         4.98 |
|   225 | TestRunningBinaryUpgrade                            |        69.76 |
|   228 | TestMissingContainerUpgrade                         |       172.61 |
|   249 | TestStoppedBinaryUpgrade/Upgrade                    |        78.35 |
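To triage one of these locally, a failing test can be re-run by name with Go's -run filter. A minimal sketch, assuming a minikube source checkout with out/minikube-linux-arm64 already built; the --minikube-start-args values mirror the start flags recorded in the audit log below, but the exact harness flags may differ per CI job:

	go test ./test/integration -v -timeout 60m \
		-run 'TestAddons/parallel/Ingress' \
		--minikube-start-args="--driver=docker --container-runtime=crio"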
TestAddons/parallel/Ingress (168.02s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:183: (dbg) Run:  kubectl --context addons-351470 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:208: (dbg) Run:  kubectl --context addons-351470 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context addons-351470 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [a9f88e83-1253-48d8-aaad-c662953fe3c6] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [a9f88e83-1253-48d8-aaad-c662953fe3c6] Running
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.02245357s
addons_test.go:238: (dbg) Run:  out/minikube-linux-arm64 -p addons-351470 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:238: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-351470 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.786028759s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:254: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
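The status 28 above is the exit code of the remote curl process; in curl, 28 is CURLE_OPERATION_TIMEDOUT, so the SSH session itself worked but curl gave up waiting for the ingress to answer. A manual re-check of the same step, as a sketch: the -m 10 client-side timeout and -v verbosity are illustrative additions, the rest mirrors the failing command:

	out/minikube-linux-arm64 -p addons-351470 ssh "curl -sv -m 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"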
addons_test.go:262: (dbg) Run:  kubectl --context addons-351470 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-arm64 -p addons-351470 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:273: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.050130064s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
addons_test.go:275: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:279: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
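The empty stderr plus the timeout in stdout mean no DNS server answered at 192.168.49.2, the node IP where the ingress-dns addon should be listening. For manual probing, explicit timeouts keep the check short (a sketch; dig and nslookup are interchangeable here):

	nslookup -timeout=5 hello-john.test 192.168.49.2
	dig +time=5 +tries=1 hello-john.test @192.168.49.2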
addons_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p addons-351470 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:287: (dbg) Run:  out/minikube-linux-arm64 -p addons-351470 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-arm64 -p addons-351470 addons disable ingress --alsologtostderr -v=1: (8.130118455s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-351470
helpers_test.go:235: (dbg) docker inspect addons-351470:
-- stdout --
	[
	    {
	        "Id": "2b54c6ee76a06eb4c585cf18003ceeccee467f7f9e95cd51bbc7284a6ae81c0e",
	        "Created": "2023-09-18T18:55:37.672543127Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 648961,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-09-18T18:55:37.995599906Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:560a33002deec07a703a16e2b1dbf6aecde4c0d46aaefa1cb6df4c8c8a7774a7",
	        "ResolvConfPath": "/var/lib/docker/containers/2b54c6ee76a06eb4c585cf18003ceeccee467f7f9e95cd51bbc7284a6ae81c0e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2b54c6ee76a06eb4c585cf18003ceeccee467f7f9e95cd51bbc7284a6ae81c0e/hostname",
	        "HostsPath": "/var/lib/docker/containers/2b54c6ee76a06eb4c585cf18003ceeccee467f7f9e95cd51bbc7284a6ae81c0e/hosts",
	        "LogPath": "/var/lib/docker/containers/2b54c6ee76a06eb4c585cf18003ceeccee467f7f9e95cd51bbc7284a6ae81c0e/2b54c6ee76a06eb4c585cf18003ceeccee467f7f9e95cd51bbc7284a6ae81c0e-json.log",
	        "Name": "/addons-351470",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-351470:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-351470",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/302d4128170298c9a49dfc6c566ed77fe8ec771cd64821ba9f1f3dc979ecd671-init/diff:/var/lib/docker/overlay2/4e03e4714bce8b0ad83859c0e431c5abac0520d3520e787a29bac63ee8779cc4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/302d4128170298c9a49dfc6c566ed77fe8ec771cd64821ba9f1f3dc979ecd671/merged",
	                "UpperDir": "/var/lib/docker/overlay2/302d4128170298c9a49dfc6c566ed77fe8ec771cd64821ba9f1f3dc979ecd671/diff",
	                "WorkDir": "/var/lib/docker/overlay2/302d4128170298c9a49dfc6c566ed77fe8ec771cd64821ba9f1f3dc979ecd671/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-351470",
	                "Source": "/var/lib/docker/volumes/addons-351470/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-351470",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-351470",
	                "name.minikube.sigs.k8s.io": "addons-351470",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6be29c99e8fe7b80b985892e859f8abb52f6b9e392f2d2e0b40a201bfaf362d7",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33415"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33414"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33411"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33413"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33412"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/6be29c99e8fe",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-351470": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "2b54c6ee76a0",
	                        "addons-351470"
	                    ],
	                    "NetworkID": "c52a98ebb1827bd9b5c5e2fd668d96c6487b504e8c475a0cff92e03a24d9fcd2",
	                    "EndpointID": "28e6915999bc31941809b2377f905912b998922a2caca62098462683a86f52d9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]
-- /stdout --
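The inspect output confirms the node container is running, with each exposed port published on a dynamic 127.0.0.1 host port. Individual fields can be extracted with a Go template instead of scanning the full JSON; this is the same template the harness itself uses later in these logs to locate the SSH port:

	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-351470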
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-351470 -n addons-351470
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-351470 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-351470 logs -n 25: (1.663069812s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-623514   | jenkins | v1.31.2 | 18 Sep 23 18:54 UTC |                     |
	|         | -p download-only-623514        |                        |         |         |                     |                     |
	|         | --force --alsologtostderr      |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| start   | -o=json --download-only        | download-only-623514   | jenkins | v1.31.2 | 18 Sep 23 18:54 UTC |                     |
	|         | -p download-only-623514        |                        |         |         |                     |                     |
	|         | --force --alsologtostderr      |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2   |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| delete  | --all                          | minikube               | jenkins | v1.31.2 | 18 Sep 23 18:55 UTC | 18 Sep 23 18:55 UTC |
	| delete  | -p download-only-623514        | download-only-623514   | jenkins | v1.31.2 | 18 Sep 23 18:55 UTC | 18 Sep 23 18:55 UTC |
	| delete  | -p download-only-623514        | download-only-623514   | jenkins | v1.31.2 | 18 Sep 23 18:55 UTC | 18 Sep 23 18:55 UTC |
	| start   | --download-only -p             | download-docker-150608 | jenkins | v1.31.2 | 18 Sep 23 18:55 UTC |                     |
	|         | download-docker-150608         |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| delete  | -p download-docker-150608      | download-docker-150608 | jenkins | v1.31.2 | 18 Sep 23 18:55 UTC | 18 Sep 23 18:55 UTC |
	| start   | --download-only -p             | binary-mirror-476016   | jenkins | v1.31.2 | 18 Sep 23 18:55 UTC |                     |
	|         | binary-mirror-476016           |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --binary-mirror                |                        |         |         |                     |                     |
	|         | http://127.0.0.1:35741         |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-476016        | binary-mirror-476016   | jenkins | v1.31.2 | 18 Sep 23 18:55 UTC | 18 Sep 23 18:55 UTC |
	| start   | -p addons-351470               | addons-351470          | jenkins | v1.31.2 | 18 Sep 23 18:55 UTC | 18 Sep 23 18:58 UTC |
	|         | --wait=true --memory=4000      |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --addons=registry              |                        |         |         |                     |                     |
	|         | --addons=metrics-server        |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots       |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |                        |         |         |                     |                     |
	|         | --addons=gcp-auth              |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner         |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	|         | --addons=ingress               |                        |         |         |                     |                     |
	|         | --addons=ingress-dns           |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-351470          | jenkins | v1.31.2 | 18 Sep 23 18:58 UTC | 18 Sep 23 18:58 UTC |
	|         | addons-351470                  |                        |         |         |                     |                     |
	| addons  | enable headlamp                | addons-351470          | jenkins | v1.31.2 | 18 Sep 23 18:58 UTC |                     |
	|         | -p addons-351470               |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| ip      | addons-351470 ip               | addons-351470          | jenkins | v1.31.2 | 18 Sep 23 18:58 UTC | 18 Sep 23 18:58 UTC |
	| addons  | addons-351470 addons disable   | addons-351470          | jenkins | v1.31.2 | 18 Sep 23 18:58 UTC | 18 Sep 23 18:58 UTC |
	|         | registry --alsologtostderr     |                        |         |         |                     |                     |
	|         | -v=1                           |                        |         |         |                     |                     |
	| ssh     | addons-351470 ssh curl -s      | addons-351470          | jenkins | v1.31.2 | 18 Sep 23 18:58 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:    |                        |         |         |                     |                     |
	|         | nginx.example.com'             |                        |         |         |                     |                     |
	| addons  | addons-351470 addons           | addons-351470          | jenkins | v1.31.2 | 18 Sep 23 18:58 UTC | 18 Sep 23 18:58 UTC |
	|         | disable csi-hostpath-driver    |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| addons  | addons-351470 addons           | addons-351470          | jenkins | v1.31.2 | 18 Sep 23 18:58 UTC | 18 Sep 23 18:58 UTC |
	|         | disable volumesnapshots        |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| addons  | addons-351470 addons           | addons-351470          | jenkins | v1.31.2 | 18 Sep 23 18:58 UTC | 18 Sep 23 18:59 UTC |
	|         | disable metrics-server         |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p    | addons-351470          | jenkins | v1.31.2 | 18 Sep 23 18:59 UTC | 18 Sep 23 18:59 UTC |
	|         | addons-351470                  |                        |         |         |                     |                     |
	| ip      | addons-351470 ip               | addons-351470          | jenkins | v1.31.2 | 18 Sep 23 19:00 UTC | 18 Sep 23 19:00 UTC |
	| addons  | addons-351470 addons disable   | addons-351470          | jenkins | v1.31.2 | 18 Sep 23 19:00 UTC | 18 Sep 23 19:00 UTC |
	|         | ingress-dns --alsologtostderr  |                        |         |         |                     |                     |
	|         | -v=1                           |                        |         |         |                     |                     |
	| addons  | addons-351470 addons disable   | addons-351470          | jenkins | v1.31.2 | 18 Sep 23 19:00 UTC | 18 Sep 23 19:01 UTC |
	|         | ingress --alsologtostderr -v=1 |                        |         |         |                     |                     |
	|---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/18 18:55:14
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.21.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0918 18:55:14.129571  648496 out.go:296] Setting OutFile to fd 1 ...
	I0918 18:55:14.129754  648496 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 18:55:14.129762  648496 out.go:309] Setting ErrFile to fd 2...
	I0918 18:55:14.129768  648496 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 18:55:14.130038  648496 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17263-642665/.minikube/bin
	I0918 18:55:14.130586  648496 out.go:303] Setting JSON to false
	I0918 18:55:14.131543  648496 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":9460,"bootTime":1695053855,"procs":237,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0918 18:55:14.131622  648496 start.go:138] virtualization:  
	I0918 18:55:14.144289  648496 out.go:177] * [addons-351470] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0918 18:55:14.150729  648496 out.go:177]   - MINIKUBE_LOCATION=17263
	I0918 18:55:14.152979  648496 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 18:55:14.150853  648496 notify.go:220] Checking for updates...
	I0918 18:55:14.158372  648496 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17263-642665/kubeconfig
	I0918 18:55:14.160760  648496 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17263-642665/.minikube
	I0918 18:55:14.162694  648496 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0918 18:55:14.165067  648496 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 18:55:14.167501  648496 driver.go:373] Setting default libvirt URI to qemu:///system
	I0918 18:55:14.194602  648496 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0918 18:55:14.194724  648496 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0918 18:55:14.291669  648496 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-09-18 18:55:14.281895585 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0918 18:55:14.291806  648496 docker.go:294] overlay module found
	I0918 18:55:14.295925  648496 out.go:177] * Using the docker driver based on user configuration
	I0918 18:55:14.298139  648496 start.go:298] selected driver: docker
	I0918 18:55:14.298155  648496 start.go:902] validating driver "docker" against <nil>
	I0918 18:55:14.298167  648496 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 18:55:14.298804  648496 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0918 18:55:14.366319  648496 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-09-18 18:55:14.355335125 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0918 18:55:14.366499  648496 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0918 18:55:14.366758  648496 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 18:55:14.368950  648496 out.go:177] * Using Docker driver with root privileges
	I0918 18:55:14.371169  648496 cni.go:84] Creating CNI manager for ""
	I0918 18:55:14.371195  648496 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0918 18:55:14.371207  648496 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0918 18:55:14.371218  648496 start_flags.go:321] config:
	{Name:addons-351470 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-351470 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0918 18:55:14.373476  648496 out.go:177] * Starting control plane node addons-351470 in cluster addons-351470
	I0918 18:55:14.375461  648496 cache.go:122] Beginning downloading kic base image for docker with crio
	I0918 18:55:14.377789  648496 out.go:177] * Pulling base image ...
	I0918 18:55:14.380037  648496 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I0918 18:55:14.380097  648496 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17263-642665/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4
	I0918 18:55:14.380110  648496 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon
	I0918 18:55:14.380117  648496 cache.go:57] Caching tarball of preloaded images
	I0918 18:55:14.380203  648496 preload.go:174] Found /home/jenkins/minikube-integration/17263-642665/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0918 18:55:14.380213  648496 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I0918 18:55:14.380560  648496 profile.go:148] Saving config to /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/config.json ...
	I0918 18:55:14.380590  648496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/config.json: {Name:mk3fb0408b5d9dad7821d789b87d077f5681e779 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 18:55:14.397416  648496 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 to local cache
	I0918 18:55:14.397566  648496 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local cache directory
	I0918 18:55:14.397585  648496 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local cache directory, skipping pull
	I0918 18:55:14.397591  648496 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 exists in cache, skipping pull
	I0918 18:55:14.397599  648496 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 as a tarball
	I0918 18:55:14.397605  648496 cache.go:163] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 from local cache
	I0918 18:55:30.433587  648496 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 from cached tarball
	I0918 18:55:30.433625  648496 cache.go:195] Successfully downloaded all kic artifacts
	I0918 18:55:30.433678  648496 start.go:365] acquiring machines lock for addons-351470: {Name:mk8c04819510b908dbe116c0bcf21061e409e05e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 18:55:30.433800  648496 start.go:369] acquired machines lock for "addons-351470" in 98.905µs
	I0918 18:55:30.433833  648496 start.go:93] Provisioning new machine with config: &{Name:addons-351470 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-351470 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0918 18:55:30.433910  648496 start.go:125] createHost starting for "" (driver="docker")
	I0918 18:55:30.436852  648496 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0918 18:55:30.437106  648496 start.go:159] libmachine.API.Create for "addons-351470" (driver="docker")
	I0918 18:55:30.437132  648496 client.go:168] LocalClient.Create starting
	I0918 18:55:30.437263  648496 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17263-642665/.minikube/certs/ca.pem
	I0918 18:55:30.816732  648496 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17263-642665/.minikube/certs/cert.pem
	I0918 18:55:31.295906  648496 cli_runner.go:164] Run: docker network inspect addons-351470 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0918 18:55:31.317826  648496 cli_runner.go:211] docker network inspect addons-351470 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0918 18:55:31.317906  648496 network_create.go:281] running [docker network inspect addons-351470] to gather additional debugging logs...
	I0918 18:55:31.317927  648496 cli_runner.go:164] Run: docker network inspect addons-351470
	W0918 18:55:31.334543  648496 cli_runner.go:211] docker network inspect addons-351470 returned with exit code 1
	I0918 18:55:31.334577  648496 network_create.go:284] error running [docker network inspect addons-351470]: docker network inspect addons-351470: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-351470 not found
	I0918 18:55:31.334589  648496 network_create.go:286] output of [docker network inspect addons-351470]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-351470 not found
	
	** /stderr **
	I0918 18:55:31.334659  648496 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0918 18:55:31.353482  648496 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4000cbd4b0}
	I0918 18:55:31.353520  648496 network_create.go:123] attempt to create docker network addons-351470 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0918 18:55:31.353575  648496 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-351470 addons-351470
	I0918 18:55:31.429109  648496 network_create.go:107] docker network addons-351470 192.168.49.0/24 created
	I0918 18:55:31.429141  648496 kic.go:117] calculated static IP "192.168.49.2" for the "addons-351470" container
	I0918 18:55:31.429223  648496 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0918 18:55:31.445718  648496 cli_runner.go:164] Run: docker volume create addons-351470 --label name.minikube.sigs.k8s.io=addons-351470 --label created_by.minikube.sigs.k8s.io=true
	I0918 18:55:31.464588  648496 oci.go:103] Successfully created a docker volume addons-351470
	I0918 18:55:31.464680  648496 cli_runner.go:164] Run: docker run --rm --name addons-351470-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-351470 --entrypoint /usr/bin/test -v addons-351470:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -d /var/lib
	I0918 18:55:33.356021  648496 cli_runner.go:217] Completed: docker run --rm --name addons-351470-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-351470 --entrypoint /usr/bin/test -v addons-351470:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -d /var/lib: (1.891290146s)
	I0918 18:55:33.356061  648496 oci.go:107] Successfully prepared a docker volume addons-351470
	I0918 18:55:33.356080  648496 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I0918 18:55:33.356099  648496 kic.go:190] Starting extracting preloaded images to volume ...
	I0918 18:55:33.356196  648496 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17263-642665/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-351470:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -I lz4 -xf /preloaded.tar -C /extractDir
	I0918 18:55:37.593894  648496 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17263-642665/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-351470:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -I lz4 -xf /preloaded.tar -C /extractDir: (4.237642823s)
	I0918 18:55:37.593926  648496 kic.go:199] duration metric: took 4.237824 seconds to extract preloaded images to volume
	W0918 18:55:37.594066  648496 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0918 18:55:37.594187  648496 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0918 18:55:37.653849  648496 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-351470 --name addons-351470 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-351470 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-351470 --network addons-351470 --ip 192.168.49.2 --volume addons-351470:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3
	I0918 18:55:38.014403  648496 cli_runner.go:164] Run: docker container inspect addons-351470 --format={{.State.Running}}
	I0918 18:55:38.044380  648496 cli_runner.go:164] Run: docker container inspect addons-351470 --format={{.State.Status}}
	I0918 18:55:38.068104  648496 cli_runner.go:164] Run: docker exec addons-351470 stat /var/lib/dpkg/alternatives/iptables
	I0918 18:55:38.136690  648496 oci.go:144] the created container "addons-351470" has a running status.
	I0918 18:55:38.136722  648496 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17263-642665/.minikube/machines/addons-351470/id_rsa...
	I0918 18:55:38.281384  648496 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17263-642665/.minikube/machines/addons-351470/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0918 18:55:38.312034  648496 cli_runner.go:164] Run: docker container inspect addons-351470 --format={{.State.Status}}
	I0918 18:55:38.342435  648496 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0918 18:55:38.342466  648496 kic_runner.go:114] Args: [docker exec --privileged addons-351470 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0918 18:55:38.420593  648496 cli_runner.go:164] Run: docker container inspect addons-351470 --format={{.State.Status}}
	I0918 18:55:38.450551  648496 machine.go:88] provisioning docker machine ...
	I0918 18:55:38.450580  648496 ubuntu.go:169] provisioning hostname "addons-351470"
	I0918 18:55:38.450645  648496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-351470
	I0918 18:55:38.477310  648496 main.go:141] libmachine: Using SSH client type: native
	I0918 18:55:38.477735  648496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 33415 <nil> <nil>}
	I0918 18:55:38.477747  648496 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-351470 && echo "addons-351470" | sudo tee /etc/hostname
	I0918 18:55:38.478334  648496 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38154->127.0.0.1:33415: read: connection reset by peer
	I0918 18:55:41.633096  648496 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-351470
	
	I0918 18:55:41.633206  648496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-351470
	I0918 18:55:41.652286  648496 main.go:141] libmachine: Using SSH client type: native
	I0918 18:55:41.652701  648496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 33415 <nil> <nil>}
	I0918 18:55:41.652718  648496 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-351470' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-351470/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-351470' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0918 18:55:41.793034  648496 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 18:55:41.793104  648496 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17263-642665/.minikube CaCertPath:/home/jenkins/minikube-integration/17263-642665/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17263-642665/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17263-642665/.minikube}
	I0918 18:55:41.793139  648496 ubuntu.go:177] setting up certificates
	I0918 18:55:41.793177  648496 provision.go:83] configureAuth start
	I0918 18:55:41.793285  648496 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-351470
	I0918 18:55:41.811472  648496 provision.go:138] copyHostCerts
	I0918 18:55:41.811552  648496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17263-642665/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17263-642665/.minikube/ca.pem (1082 bytes)
	I0918 18:55:41.811684  648496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17263-642665/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17263-642665/.minikube/cert.pem (1123 bytes)
	I0918 18:55:41.811754  648496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17263-642665/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17263-642665/.minikube/key.pem (1675 bytes)
	I0918 18:55:41.811838  648496 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17263-642665/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17263-642665/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17263-642665/.minikube/certs/ca-key.pem org=jenkins.addons-351470 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-351470]
	I0918 18:55:42.525302  648496 provision.go:172] copyRemoteCerts
	I0918 18:55:42.525371  648496 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0918 18:55:42.525416  648496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-351470
	I0918 18:55:42.547235  648496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33415 SSHKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/addons-351470/id_rsa Username:docker}
	I0918 18:55:42.646651  648496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17263-642665/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0918 18:55:42.676674  648496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17263-642665/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0918 18:55:42.707870  648496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17263-642665/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0918 18:55:42.736786  648496 provision.go:86] duration metric: configureAuth took 943.576423ms
	I0918 18:55:42.736814  648496 ubuntu.go:193] setting minikube options for container-runtime
	I0918 18:55:42.736998  648496 config.go:182] Loaded profile config "addons-351470": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0918 18:55:42.737110  648496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-351470
	I0918 18:55:42.755826  648496 main.go:141] libmachine: Using SSH client type: native
	I0918 18:55:42.756240  648496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 33415 <nil> <nil>}
	I0918 18:55:42.756271  648496 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0918 18:55:43.015456  648496 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0918 18:55:43.015482  648496 machine.go:91] provisioned docker machine in 4.564911182s
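
The sysconfig write above is a single composed shell pipeline run over SSH. A sketch of how such a command string can be assembled (illustrative only; the real template lives in minikube's ubuntu.go, cited in the log):

    package main

    import "fmt"

    // crioSysconfigCmd builds the remote command: create /etc/sysconfig,
    // write CRIO_MINIKUBE_OPTIONS via tee, then restart crio.
    func crioSysconfigCmd(opts string) string {
        return fmt.Sprintf("sudo mkdir -p /etc/sysconfig && printf %%s \"\nCRIO_MINIKUBE_OPTIONS='%s'\n\" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio", opts)
    }

    func main() { fmt.Println(crioSysconfigCmd("--insecure-registry 10.96.0.0/12 ")) }
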
	I0918 18:55:43.015492  648496 client.go:171] LocalClient.Create took 12.578354739s
	I0918 18:55:43.015503  648496 start.go:167] duration metric: libmachine.API.Create for "addons-351470" took 12.578400089s
	I0918 18:55:43.015511  648496 start.go:300] post-start starting for "addons-351470" (driver="docker")
	I0918 18:55:43.015521  648496 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0918 18:55:43.015603  648496 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0918 18:55:43.015653  648496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-351470
	I0918 18:55:43.037893  648496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33415 SSHKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/addons-351470/id_rsa Username:docker}
	I0918 18:55:43.139152  648496 ssh_runner.go:195] Run: cat /etc/os-release
	I0918 18:55:43.143365  648496 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0918 18:55:43.143400  648496 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0918 18:55:43.143412  648496 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0918 18:55:43.143420  648496 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0918 18:55:43.143431  648496 filesync.go:126] Scanning /home/jenkins/minikube-integration/17263-642665/.minikube/addons for local assets ...
	I0918 18:55:43.143506  648496 filesync.go:126] Scanning /home/jenkins/minikube-integration/17263-642665/.minikube/files for local assets ...
	I0918 18:55:43.143534  648496 start.go:303] post-start completed in 128.01724ms
	I0918 18:55:43.143868  648496 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-351470
	I0918 18:55:43.161156  648496 profile.go:148] Saving config to /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/config.json ...
	I0918 18:55:43.161441  648496 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0918 18:55:43.161493  648496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-351470
	I0918 18:55:43.178880  648496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33415 SSHKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/addons-351470/id_rsa Username:docker}
	I0918 18:55:43.277852  648496 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0918 18:55:43.283558  648496 start.go:128] duration metric: createHost completed in 12.849631537s
	I0918 18:55:43.283580  648496 start.go:83] releasing machines lock for "addons-351470", held for 12.849765002s
	I0918 18:55:43.283651  648496 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-351470
	I0918 18:55:43.301431  648496 ssh_runner.go:195] Run: cat /version.json
	I0918 18:55:43.301454  648496 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0918 18:55:43.301485  648496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-351470
	I0918 18:55:43.301522  648496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-351470
	I0918 18:55:43.321063  648496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33415 SSHKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/addons-351470/id_rsa Username:docker}
	I0918 18:55:43.322478  648496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33415 SSHKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/addons-351470/id_rsa Username:docker}
	I0918 18:55:43.416469  648496 ssh_runner.go:195] Run: systemctl --version
	I0918 18:55:43.560468  648496 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0918 18:55:43.710596  648496 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0918 18:55:43.716154  648496 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0918 18:55:43.741457  648496 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0918 18:55:43.741537  648496 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0918 18:55:43.777622  648496 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
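
The two find/mv steps above disable conflicting CNI configs by renaming them with a .mk_disabled suffix rather than deleting them, so they stay recoverable. The same rename-to-disable idea as a minimal Go sketch (assuming the /etc/cni/net.d layout; not minikube's cni package):

    package main

    import (
        "os"
        "path/filepath"
        "strings"
    )

    // disableBridgeCNIs renames bridge/podman CNI configs so the runtime
    // ignores them, skipping anything already disabled.
    func disableBridgeCNIs(dir string) error {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return err
        }
        for _, e := range entries {
            name := e.Name()
            if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
                continue
            }
            if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
                src := filepath.Join(dir, name)
                if err := os.Rename(src, src+".mk_disabled"); err != nil {
                    return err
                }
            }
        }
        return nil
    }

    func main() { _ = disableBridgeCNIs("/etc/cni/net.d") }
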
	I0918 18:55:43.777643  648496 start.go:469] detecting cgroup driver to use...
	I0918 18:55:43.777676  648496 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0918 18:55:43.777727  648496 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0918 18:55:43.796535  648496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0918 18:55:43.810566  648496 docker.go:196] disabling cri-docker service (if available) ...
	I0918 18:55:43.810674  648496 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0918 18:55:43.826976  648496 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0918 18:55:43.844266  648496 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0918 18:55:43.939181  648496 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0918 18:55:44.060817  648496 docker.go:212] disabling docker service ...
	I0918 18:55:44.060924  648496 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0918 18:55:44.083832  648496 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0918 18:55:44.100633  648496 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0918 18:55:44.196546  648496 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0918 18:55:44.303130  648496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0918 18:55:44.317150  648496 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 18:55:44.337602  648496 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0918 18:55:44.337669  648496 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 18:55:44.350393  648496 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0918 18:55:44.350466  648496 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 18:55:44.364380  648496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 18:55:44.383342  648496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
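
The sed edits above pin the pause image and the cgroup manager in cri-o's drop-in config. The first two rewrites expressed as a Go sketch (assuming the same key = "value" layout as /etc/crio/crio.conf.d/02-crio.conf):

    package main

    import (
        "os"
        "regexp"
    )

    // rewriteCrioConf replaces the pause_image and cgroup_manager lines,
    // matching what the sed commands in the log do.
    func rewriteCrioConf(path string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
        data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
        return os.WriteFile(path, data, 0o644)
    }

    func main() { _ = rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf") }
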
	I0918 18:55:44.395383  648496 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0918 18:55:44.406558  648496 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0918 18:55:44.417472  648496 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0918 18:55:44.427933  648496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 18:55:44.515011  648496 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0918 18:55:44.644632  648496 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0918 18:55:44.644717  648496 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0918 18:55:44.649545  648496 start.go:537] Will wait 60s for crictl version
	I0918 18:55:44.649608  648496 ssh_runner.go:195] Run: which crictl
	I0918 18:55:44.654096  648496 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0918 18:55:44.704453  648496 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0918 18:55:44.704552  648496 ssh_runner.go:195] Run: crio --version
	I0918 18:55:44.747608  648496 ssh_runner.go:195] Run: crio --version
	I0918 18:55:44.792302  648496 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.6 ...
	I0918 18:55:44.794751  648496 cli_runner.go:164] Run: docker network inspect addons-351470 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0918 18:55:44.811633  648496 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0918 18:55:44.816299  648496 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
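
The one-liner above makes the /etc/hosts update idempotent: strip any stale line for the name, append a fresh mapping, and copy the temp file into place. The same logic as a Go sketch (tab-delimited hosts layout assumed):

    package main

    import (
        "os"
        "strings"
    )

    // setHostsEntry drops any existing line ending in "\t<host>" and
    // appends a fresh "ip\thost" mapping, mirroring the shell pipeline.
    func setHostsEntry(path, ip, host string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+host) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+host)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() { _ = setHostsEntry("/etc/hosts", "192.168.49.1", "host.minikube.internal") }
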
	I0918 18:55:44.830011  648496 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I0918 18:55:44.830082  648496 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 18:55:44.892962  648496 crio.go:496] all images are preloaded for cri-o runtime.
	I0918 18:55:44.892984  648496 crio.go:415] Images already preloaded, skipping extraction
	I0918 18:55:44.893040  648496 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 18:55:44.936953  648496 crio.go:496] all images are preloaded for cri-o runtime.
	I0918 18:55:44.936973  648496 cache_images.go:84] Images are preloaded, skipping loading
	I0918 18:55:44.937070  648496 ssh_runner.go:195] Run: crio config
	I0918 18:55:44.993662  648496 cni.go:84] Creating CNI manager for ""
	I0918 18:55:44.993683  648496 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0918 18:55:44.993720  648496 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0918 18:55:44.993744  648496 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-351470 NodeName:addons-351470 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0918 18:55:44.993884  648496 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-351470"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0918 18:55:44.993953  648496 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-351470 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:addons-351470 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0918 18:55:44.994019  648496 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I0918 18:55:45.010680  648496 binaries.go:44] Found k8s binaries, skipping transfer
	I0918 18:55:45.010770  648496 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0918 18:55:45.034235  648496 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I0918 18:55:45.081585  648496 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0918 18:55:45.124208  648496 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I0918 18:55:45.157236  648496 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0918 18:55:45.163549  648496 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 18:55:45.184370  648496 certs.go:56] Setting up /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470 for IP: 192.168.49.2
	I0918 18:55:45.184435  648496 certs.go:190] acquiring lock for shared ca certs: {Name:mkb16b377708c2d983623434e9d896d9d8fd7133 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 18:55:45.184670  648496 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17263-642665/.minikube/ca.key
	I0918 18:55:45.870169  648496 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17263-642665/.minikube/ca.crt ...
	I0918 18:55:45.870201  648496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17263-642665/.minikube/ca.crt: {Name:mk8ce942029a0252572de9cb7b7d9efee3019b19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 18:55:45.870416  648496 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17263-642665/.minikube/ca.key ...
	I0918 18:55:45.870433  648496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17263-642665/.minikube/ca.key: {Name:mk519f55d35ef0dfd7b5f58eb679af53f0fdf2ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 18:55:45.870526  648496 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17263-642665/.minikube/proxy-client-ca.key
	I0918 18:55:48.079293  648496 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17263-642665/.minikube/proxy-client-ca.crt ...
	I0918 18:55:48.079335  648496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17263-642665/.minikube/proxy-client-ca.crt: {Name:mk3dacbced543e99900eaea9b133012dae11b85b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 18:55:48.079545  648496 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17263-642665/.minikube/proxy-client-ca.key ...
	I0918 18:55:48.079554  648496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17263-642665/.minikube/proxy-client-ca.key: {Name:mkc064c1a51f99c9b98de1d53513177dda997c24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 18:55:48.079690  648496 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/client.key
	I0918 18:55:48.079734  648496 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/client.crt with IP's: []
	I0918 18:55:48.450727  648496 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/client.crt ...
	I0918 18:55:48.450764  648496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/client.crt: {Name:mk9cf70eae8ff62c50839a2cd2c9a29cbe4330ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 18:55:48.450965  648496 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/client.key ...
	I0918 18:55:48.450981  648496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/client.key: {Name:mk9d001b63a8a7ce465d82d0b39908eac9c7eec9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 18:55:48.451600  648496 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/apiserver.key.dd3b5fb2
	I0918 18:55:48.451631  648496 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0918 18:55:48.730498  648496 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/apiserver.crt.dd3b5fb2 ...
	I0918 18:55:48.730534  648496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/apiserver.crt.dd3b5fb2: {Name:mk6e6762897d4c7e3e3cde69c2e29c2bec36ef38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 18:55:48.731200  648496 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/apiserver.key.dd3b5fb2 ...
	I0918 18:55:48.731225  648496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/apiserver.key.dd3b5fb2: {Name:mk2bc6742a825845966f3c6be3f59c519d0c0961 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 18:55:48.731312  648496 certs.go:337] copying /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/apiserver.crt
	I0918 18:55:48.731381  648496 certs.go:341] copying /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/apiserver.key
	I0918 18:55:48.731434  648496 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/proxy-client.key
	I0918 18:55:48.731453  648496 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/proxy-client.crt with IP's: []
	I0918 18:55:50.129170  648496 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/proxy-client.crt ...
	I0918 18:55:50.129208  648496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/proxy-client.crt: {Name:mkfae3f3218f2f6445507927280b4e94eeda031a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 18:55:50.129955  648496 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/proxy-client.key ...
	I0918 18:55:50.129974  648496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/proxy-client.key: {Name:mk381624f0d7b6e5a5f6676b7678903363d91ae2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 18:55:50.130186  648496 certs.go:437] found cert: /home/jenkins/minikube-integration/17263-642665/.minikube/certs/ca-key.pem (1675 bytes)
	I0918 18:55:50.130235  648496 certs.go:437] found cert: /home/jenkins/minikube-integration/17263-642665/.minikube/certs/ca.pem (1082 bytes)
	I0918 18:55:50.130271  648496 certs.go:437] found cert: /home/jenkins/minikube-integration/17263-642665/.minikube/certs/cert.pem (1123 bytes)
	I0918 18:55:50.130302  648496 certs.go:437] found cert: /home/jenkins/minikube-integration/17263-642665/.minikube/certs/key.pem (1675 bytes)
	I0918 18:55:50.130998  648496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0918 18:55:50.163526  648496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0918 18:55:50.196531  648496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0918 18:55:50.226293  648496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0918 18:55:50.256316  648496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17263-642665/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0918 18:55:50.285901  648496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17263-642665/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0918 18:55:50.315637  648496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17263-642665/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0918 18:55:50.344082  648496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17263-642665/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0918 18:55:50.372471  648496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17263-642665/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0918 18:55:50.400936  648496 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0918 18:55:50.422129  648496 ssh_runner.go:195] Run: openssl version
	I0918 18:55:50.429557  648496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0918 18:55:50.441465  648496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0918 18:55:50.446311  648496 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 18 18:55 /usr/share/ca-certificates/minikubeCA.pem
	I0918 18:55:50.446389  648496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0918 18:55:50.455229  648496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0918 18:55:50.467050  648496 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0918 18:55:50.471653  648496 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0918 18:55:50.471752  648496 kubeadm.go:404] StartCluster: {Name:addons-351470 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-351470 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0918 18:55:50.471918  648496 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0918 18:55:50.471981  648496 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0918 18:55:50.514883  648496 cri.go:89] found id: ""
	I0918 18:55:50.514956  648496 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0918 18:55:50.525580  648496 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0918 18:55:50.536571  648496 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0918 18:55:50.536688  648496 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 18:55:50.547610  648496 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 18:55:50.547649  648496 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0918 18:55:50.601042  648496 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I0918 18:55:50.601287  648496 kubeadm.go:322] [preflight] Running pre-flight checks
	I0918 18:55:50.646727  648496 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0918 18:55:50.646839  648496 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1044-aws
	I0918 18:55:50.646898  648496 kubeadm.go:322] OS: Linux
	I0918 18:55:50.646970  648496 kubeadm.go:322] CGROUPS_CPU: enabled
	I0918 18:55:50.647042  648496 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0918 18:55:50.647104  648496 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0918 18:55:50.647174  648496 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0918 18:55:50.647234  648496 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0918 18:55:50.647305  648496 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0918 18:55:50.647364  648496 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0918 18:55:50.647431  648496 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0918 18:55:50.647550  648496 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0918 18:55:50.735171  648496 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0918 18:55:50.735330  648496 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0918 18:55:50.735462  648496 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0918 18:55:50.996777  648496 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0918 18:55:50.999954  648496 out.go:204]   - Generating certificates and keys ...
	I0918 18:55:51.000099  648496 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0918 18:55:51.000161  648496 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0918 18:55:51.354564  648496 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0918 18:55:52.016551  648496 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0918 18:55:52.403400  648496 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0918 18:55:53.058358  648496 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0918 18:55:53.641305  648496 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0918 18:55:53.641825  648496 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-351470 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0918 18:55:54.290652  648496 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0918 18:55:54.291176  648496 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-351470 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0918 18:55:54.603528  648496 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0918 18:55:54.836218  648496 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0918 18:55:55.276696  648496 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0918 18:55:55.277093  648496 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0918 18:55:55.713068  648496 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0918 18:55:56.271080  648496 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0918 18:55:56.591944  648496 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0918 18:55:56.979931  648496 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0918 18:55:56.980519  648496 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0918 18:55:56.983114  648496 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0918 18:55:56.986710  648496 out.go:204]   - Booting up control plane ...
	I0918 18:55:56.986863  648496 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0918 18:55:56.986941  648496 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0918 18:55:56.987586  648496 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0918 18:55:56.998638  648496 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0918 18:55:56.999626  648496 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0918 18:55:56.999823  648496 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0918 18:55:57.103960  648496 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0918 18:56:05.107293  648496 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.003397 seconds
	I0918 18:56:05.107414  648496 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0918 18:56:05.124454  648496 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0918 18:56:05.651036  648496 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0918 18:56:05.651224  648496 kubeadm.go:322] [mark-control-plane] Marking the node addons-351470 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0918 18:56:06.163427  648496 kubeadm.go:322] [bootstrap-token] Using token: z2ghwa.ius3vvohde9l6hlk
	I0918 18:56:06.165794  648496 out.go:204]   - Configuring RBAC rules ...
	I0918 18:56:06.165923  648496 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0918 18:56:06.172969  648496 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0918 18:56:06.181749  648496 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0918 18:56:06.187891  648496 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0918 18:56:06.192282  648496 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0918 18:56:06.197809  648496 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0918 18:56:06.216027  648496 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0918 18:56:06.475987  648496 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0918 18:56:06.612146  648496 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0918 18:56:06.612163  648496 kubeadm.go:322] 
	I0918 18:56:06.612220  648496 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0918 18:56:06.612225  648496 kubeadm.go:322] 
	I0918 18:56:06.612297  648496 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0918 18:56:06.612302  648496 kubeadm.go:322] 
	I0918 18:56:06.612325  648496 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0918 18:56:06.612387  648496 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0918 18:56:06.612434  648496 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0918 18:56:06.612439  648496 kubeadm.go:322] 
	I0918 18:56:06.612489  648496 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0918 18:56:06.612494  648496 kubeadm.go:322] 
	I0918 18:56:06.612539  648496 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0918 18:56:06.612544  648496 kubeadm.go:322] 
	I0918 18:56:06.612593  648496 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0918 18:56:06.612663  648496 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0918 18:56:06.612727  648496 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0918 18:56:06.612731  648496 kubeadm.go:322] 
	I0918 18:56:06.612810  648496 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0918 18:56:06.612882  648496 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0918 18:56:06.612886  648496 kubeadm.go:322] 
	I0918 18:56:06.612965  648496 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token z2ghwa.ius3vvohde9l6hlk \
	I0918 18:56:06.613061  648496 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:1471e1bb7c66f1f1f8363746a1e5f2ae35a8554d6ad2342a0b3973b70608e7c8 \
	I0918 18:56:06.613081  648496 kubeadm.go:322] 	--control-plane 
	I0918 18:56:06.613086  648496 kubeadm.go:322] 
	I0918 18:56:06.613165  648496 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0918 18:56:06.613171  648496 kubeadm.go:322] 
	I0918 18:56:06.613247  648496 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token z2ghwa.ius3vvohde9l6hlk \
	I0918 18:56:06.613343  648496 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:1471e1bb7c66f1f1f8363746a1e5f2ae35a8554d6ad2342a0b3973b70608e7c8 
	I0918 18:56:06.615591  648496 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1044-aws\n", err: exit status 1
	I0918 18:56:06.615710  648496 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
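
The --discovery-token-ca-cert-hash in the join commands above is a sha256 digest of the CA certificate's Subject Public Key Info, which is how kubeadm pins the cluster CA for joining nodes. A small sketch that recomputes the pin from the CA file:

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    // caCertHash returns the kubeadm-style "sha256:<hex>" pin for a PEM CA cert.
    func caCertHash(pemPath string) (string, error) {
        data, err := os.ReadFile(pemPath)
        if err != nil {
            return "", err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return "", fmt.Errorf("no PEM block in %s", pemPath)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return "", err
        }
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        return fmt.Sprintf("sha256:%x", sum), nil
    }

    func main() {
        fmt.Println(caCertHash("/var/lib/minikube/certs/ca.crt"))
    }
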
	I0918 18:56:06.615877  648496 cni.go:84] Creating CNI manager for ""
	I0918 18:56:06.615890  648496 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0918 18:56:06.618406  648496 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0918 18:56:06.620660  648496 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0918 18:56:06.629983  648496 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.2/kubectl ...
	I0918 18:56:06.630001  648496 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0918 18:56:06.675116  648496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0918 18:56:07.589794  648496 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0918 18:56:07.589917  648496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 18:56:07.590002  648496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36 minikube.k8s.io/name=addons-351470 minikube.k8s.io/updated_at=2023_09_18T18_56_07_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 18:56:07.739098  648496 ops.go:34] apiserver oom_adj: -16
	I0918 18:56:07.739203  648496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 18:56:07.845481  648496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 18:56:08.456785  648496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 18:56:08.957142  648496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 18:56:09.457069  648496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 18:56:09.956977  648496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 18:56:10.456743  648496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 18:56:10.956760  648496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 18:56:11.456252  648496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 18:56:11.956392  648496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 18:56:12.456261  648496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 18:56:12.956601  648496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 18:56:13.457114  648496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 18:56:13.957003  648496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 18:56:14.456801  648496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 18:56:14.956705  648496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 18:56:15.456741  648496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 18:56:15.956290  648496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 18:56:16.456732  648496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 18:56:16.956968  648496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 18:56:17.456256  648496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 18:56:17.956671  648496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 18:56:18.456715  648496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 18:56:18.552997  648496 kubeadm.go:1081] duration metric: took 10.96312205s to wait for elevateKubeSystemPrivileges.
	I0918 18:56:18.553024  648496 kubeadm.go:406] StartCluster complete in 28.081276711s
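
The repeated `kubectl get sa default` runs above are a readiness poll: minikube retries roughly every 500ms until the default ServiceAccount exists, which took ~10.96s here. A sketch of the same poll (illustrative, not minikube's kubeadm.go):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForDefaultSA polls until `kubectl get sa default` succeeds or
    // the timeout elapses, matching the ~500ms cadence in the log.
    func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig", kubeconfig)
            if cmd.Run() == nil {
                return nil // the default ServiceAccount exists
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("default service account not ready after %s", timeout)
    }

    func main() {
        if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", time.Minute); err != nil {
            fmt.Println(err)
        }
    }
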
	I0918 18:56:18.553041  648496 settings.go:142] acquiring lock: {Name:mk1cee0139b5f0ae29a168e7793f3f69abc95f11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 18:56:18.553162  648496 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17263-642665/kubeconfig
	I0918 18:56:18.553549  648496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17263-642665/kubeconfig: {Name:mkbc55d6d811840d4d5667f8f39c79585e0314ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 18:56:18.554276  648496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0918 18:56:18.554564  648496 config.go:182] Loaded profile config "addons-351470": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0918 18:56:18.554674  648496 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0918 18:56:18.554761  648496 addons.go:69] Setting volumesnapshots=true in profile "addons-351470"
	I0918 18:56:18.554777  648496 addons.go:231] Setting addon volumesnapshots=true in "addons-351470"
	I0918 18:56:18.554816  648496 host.go:66] Checking if "addons-351470" exists ...
	I0918 18:56:18.555271  648496 cli_runner.go:164] Run: docker container inspect addons-351470 --format={{.State.Status}}
	I0918 18:56:18.555751  648496 addons.go:69] Setting cloud-spanner=true in profile "addons-351470"
	I0918 18:56:18.555770  648496 addons.go:231] Setting addon cloud-spanner=true in "addons-351470"
	I0918 18:56:18.555824  648496 host.go:66] Checking if "addons-351470" exists ...
	I0918 18:56:18.556203  648496 cli_runner.go:164] Run: docker container inspect addons-351470 --format={{.State.Status}}
	I0918 18:56:18.556731  648496 addons.go:69] Setting inspektor-gadget=true in profile "addons-351470"
	I0918 18:56:18.556757  648496 addons.go:231] Setting addon inspektor-gadget=true in "addons-351470"
	I0918 18:56:18.556789  648496 host.go:66] Checking if "addons-351470" exists ...
	I0918 18:56:18.557186  648496 cli_runner.go:164] Run: docker container inspect addons-351470 --format={{.State.Status}}
	I0918 18:56:18.557503  648496 addons.go:69] Setting metrics-server=true in profile "addons-351470"
	I0918 18:56:18.557524  648496 addons.go:231] Setting addon metrics-server=true in "addons-351470"
	I0918 18:56:18.557562  648496 host.go:66] Checking if "addons-351470" exists ...
	I0918 18:56:18.557933  648496 cli_runner.go:164] Run: docker container inspect addons-351470 --format={{.State.Status}}
	I0918 18:56:18.564013  648496 addons.go:69] Setting registry=true in profile "addons-351470"
	I0918 18:56:18.564046  648496 addons.go:231] Setting addon registry=true in "addons-351470"
	I0918 18:56:18.564092  648496 host.go:66] Checking if "addons-351470" exists ...
	I0918 18:56:18.564522  648496 cli_runner.go:164] Run: docker container inspect addons-351470 --format={{.State.Status}}
	I0918 18:56:18.567372  648496 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-351470"
	I0918 18:56:18.567517  648496 addons.go:69] Setting storage-provisioner=true in profile "addons-351470"
	I0918 18:56:18.567547  648496 addons.go:231] Setting addon storage-provisioner=true in "addons-351470"
	I0918 18:56:18.567599  648496 host.go:66] Checking if "addons-351470" exists ...
	I0918 18:56:18.572257  648496 cli_runner.go:164] Run: docker container inspect addons-351470 --format={{.State.Status}}
	I0918 18:56:18.572456  648496 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-351470"
	I0918 18:56:18.572616  648496 host.go:66] Checking if "addons-351470" exists ...
	I0918 18:56:18.572759  648496 addons.go:69] Setting default-storageclass=true in profile "addons-351470"
	I0918 18:56:18.572785  648496 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-351470"
	I0918 18:56:18.573062  648496 cli_runner.go:164] Run: docker container inspect addons-351470 --format={{.State.Status}}
	I0918 18:56:18.586841  648496 addons.go:69] Setting gcp-auth=true in profile "addons-351470"
	I0918 18:56:18.586927  648496 mustload.go:65] Loading cluster: addons-351470
	I0918 18:56:18.587126  648496 config.go:182] Loaded profile config "addons-351470": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0918 18:56:18.587378  648496 cli_runner.go:164] Run: docker container inspect addons-351470 --format={{.State.Status}}
	I0918 18:56:18.609655  648496 cli_runner.go:164] Run: docker container inspect addons-351470 --format={{.State.Status}}
	I0918 18:56:18.610530  648496 addons.go:69] Setting ingress=true in profile "addons-351470"
	I0918 18:56:18.610565  648496 addons.go:231] Setting addon ingress=true in "addons-351470"
	I0918 18:56:18.623072  648496 host.go:66] Checking if "addons-351470" exists ...
	I0918 18:56:18.623558  648496 cli_runner.go:164] Run: docker container inspect addons-351470 --format={{.State.Status}}
	I0918 18:56:18.620452  648496 addons.go:69] Setting ingress-dns=true in profile "addons-351470"
	I0918 18:56:18.695926  648496 addons.go:231] Setting addon ingress-dns=true in "addons-351470"
	I0918 18:56:18.696008  648496 host.go:66] Checking if "addons-351470" exists ...
	I0918 18:56:18.696484  648496 cli_runner.go:164] Run: docker container inspect addons-351470 --format={{.State.Status}}
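
The interleaved "Setting addon ... in" lines above come from walking the toEnable map and firing one enable path per addon; Go map iteration order is randomized, which is why the addons appear in no fixed order. A sketch of that fan-out (the function names here are hypothetical, not minikube's addons API):

    package main

    import "fmt"

    // enableAddons invokes setAddon for each enabled entry in toEnable;
    // Go deliberately leaves map iteration order unspecified.
    func enableAddons(profile string, toEnable map[string]bool, setAddon func(profile, name string) error) {
        for name, enabled := range toEnable {
            if !enabled {
                continue
            }
            if err := setAddon(profile, name); err != nil {
                fmt.Printf("skipping addon %q: %v\n", name, err)
            }
        }
    }

    func main() {
        toEnable := map[string]bool{"ingress": true, "registry": true, "gvisor": false}
        enableAddons("addons-351470", toEnable, func(profile, name string) error {
            fmt.Printf("Setting addon %s=true in %q\n", name, profile)
            return nil
        })
    }
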
	I0918 18:56:18.714467  648496 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.10
	I0918 18:56:18.727522  648496 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I0918 18:56:18.729540  648496 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0918 18:56:18.729561  648496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0918 18:56:18.729626  648496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-351470
	I0918 18:56:18.727857  648496 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I0918 18:56:18.729875  648496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0918 18:56:18.729948  648496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-351470
	I0918 18:56:18.727864  648496 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0918 18:56:18.735038  648496 out.go:177]   - Using image docker.io/registry:2.8.1
	I0918 18:56:18.745610  648496 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0918 18:56:18.744375  648496 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0918 18:56:18.751863  648496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0918 18:56:18.751952  648496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-351470
	I0918 18:56:18.755350  648496 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.20.0
	I0918 18:56:18.757469  648496 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0918 18:56:18.757491  648496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0918 18:56:18.757578  648496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-351470
	I0918 18:56:18.758327  648496 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I0918 18:56:18.758347  648496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0918 18:56:18.758400  648496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-351470
	I0918 18:56:18.819201  648496 host.go:66] Checking if "addons-351470" exists ...
	I0918 18:56:18.844223  648496 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 18:56:18.854803  648496 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 18:56:18.854825  648496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0918 18:56:18.854882  648496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-351470
	I0918 18:56:18.853596  648496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
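
The long pipeline above rewrites the CoreDNS ConfigMap so host.minikube.internal resolves to the host gateway: a hosts{} stanza is inserted ahead of the forward directive, with fallthrough for everything else. That Corefile edit as plain string surgery (a sketch of the sed step, not the pipeline itself):

    package main

    import (
        "fmt"
        "strings"
    )

    // injectHostsBlock inserts a hosts{} stanza immediately before the
    // "forward . /etc/resolv.conf" directive in a Corefile.
    func injectHostsBlock(corefile, hostIP string) string {
        stanza := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
        var b strings.Builder
        for _, line := range strings.SplitAfter(corefile, "\n") {
            if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
                b.WriteString(stanza)
            }
            b.WriteString(line)
        }
        return b.String()
    }

    func main() {
        fmt.Print(injectHostsBlock("    forward . /etc/resolv.conf\n", "192.168.49.1"))
    }
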
	I0918 18:56:18.853700  648496 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-351470" context rescaled to 1 replica
	I0918 18:56:18.866436  648496 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0918 18:56:18.869947  648496 addons.go:231] Setting addon default-storageclass=true in "addons-351470"
	I0918 18:56:18.874208  648496 host.go:66] Checking if "addons-351470" exists ...
	I0918 18:56:18.874120  648496 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0918 18:56:18.876298  648496 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0918 18:56:18.874131  648496 out.go:177] * Verifying Kubernetes components...
	I0918 18:56:18.874136  648496 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0918 18:56:18.874140  648496 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0918 18:56:18.874969  648496 cli_runner.go:164] Run: docker container inspect addons-351470 --format={{.State.Status}}
	I0918 18:56:18.888082  648496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 18:56:18.891906  648496 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0918 18:56:18.895989  648496 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0918 18:56:18.896012  648496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0918 18:56:18.896081  648496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-351470
	I0918 18:56:18.919884  648496 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0918 18:56:18.920941  648496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33415 SSHKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/addons-351470/id_rsa Username:docker}
	I0918 18:56:18.928980  648496 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0918 18:56:18.926631  648496 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0918 18:56:18.939660  648496 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.8.2
	I0918 18:56:18.948157  648496 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0918 18:56:18.955294  648496 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0918 18:56:18.948495  648496 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0918 18:56:18.955700  648496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33415 SSHKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/addons-351470/id_rsa Username:docker}
	I0918 18:56:18.964137  648496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16083 bytes)
	I0918 18:56:18.964266  648496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-351470
	I0918 18:56:18.968033  648496 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0918 18:56:18.965985  648496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33415 SSHKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/addons-351470/id_rsa Username:docker}
	I0918 18:56:18.966731  648496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33415 SSHKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/addons-351470/id_rsa Username:docker}
	I0918 18:56:18.976771  648496 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0918 18:56:18.976790  648496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0918 18:56:18.976861  648496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-351470
	I0918 18:56:19.014359  648496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33415 SSHKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/addons-351470/id_rsa Username:docker}
	I0918 18:56:19.015930  648496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33415 SSHKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/addons-351470/id_rsa Username:docker}
	I0918 18:56:19.063230  648496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33415 SSHKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/addons-351470/id_rsa Username:docker}
	I0918 18:56:19.065213  648496 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0918 18:56:19.065234  648496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0918 18:56:19.065297  648496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-351470
	I0918 18:56:19.094836  648496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33415 SSHKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/addons-351470/id_rsa Username:docker}
	I0918 18:56:19.115861  648496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33415 SSHKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/addons-351470/id_rsa Username:docker}
	I0918 18:56:19.139857  648496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33415 SSHKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/addons-351470/id_rsa Username:docker}
	I0918 18:56:19.320691  648496 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0918 18:56:19.320759  648496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0918 18:56:19.378000  648496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0918 18:56:19.399831  648496 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I0918 18:56:19.399902  648496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0918 18:56:19.438372  648496 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0918 18:56:19.438442  648496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0918 18:56:19.498565  648496 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0918 18:56:19.498641  648496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0918 18:56:19.507933  648496 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0918 18:56:19.508007  648496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0918 18:56:19.524162  648496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 18:56:19.546516  648496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0918 18:56:19.551992  648496 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I0918 18:56:19.552053  648496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0918 18:56:19.555463  648496 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0918 18:56:19.555522  648496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0918 18:56:19.567461  648496 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0918 18:56:19.567532  648496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0918 18:56:19.572281  648496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0918 18:56:19.644669  648496 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0918 18:56:19.644695  648496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0918 18:56:19.647124  648496 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0918 18:56:19.647145  648496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0918 18:56:19.662292  648496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0918 18:56:19.679570  648496 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0918 18:56:19.679598  648496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0918 18:56:19.692185  648496 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0918 18:56:19.692214  648496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0918 18:56:19.695057  648496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0918 18:56:19.798045  648496 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0918 18:56:19.798072  648496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0918 18:56:19.800992  648496 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0918 18:56:19.801018  648496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0918 18:56:19.853732  648496 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0918 18:56:19.853767  648496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0918 18:56:19.860614  648496 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0918 18:56:19.860648  648496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0918 18:56:19.976412  648496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0918 18:56:20.008962  648496 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0918 18:56:20.008989  648496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0918 18:56:20.023569  648496 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0918 18:56:20.023606  648496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0918 18:56:20.068831  648496 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0918 18:56:20.068862  648496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0918 18:56:20.149023  648496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0918 18:56:20.207594  648496 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0918 18:56:20.207623  648496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0918 18:56:20.245366  648496 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I0918 18:56:20.245393  648496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0918 18:56:20.332325  648496 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0918 18:56:20.332357  648496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0918 18:56:20.345260  648496 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0918 18:56:20.345287  648496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I0918 18:56:20.456129  648496 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0918 18:56:20.456162  648496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0918 18:56:20.463764  648496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0918 18:56:20.565478  648496 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0918 18:56:20.565503  648496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0918 18:56:20.666263  648496 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0918 18:56:20.666287  648496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0918 18:56:20.826439  648496 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0918 18:56:20.826473  648496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0918 18:56:21.005937  648496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0918 18:56:21.235946  648496 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.34777392s)
	I0918 18:56:21.236073  648496 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.380866979s)
	I0918 18:56:21.236092  648496 start.go:917] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
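The sed pipeline that just completed splices a hosts plugin block into the Corefile held in the coredns ConfigMap, so that host.minikube.internal resolves to the container gateway 192.168.49.1 from inside the cluster. A quick check of the injected record (a sketch, reusing the same kubectl binary and kubeconfig as the log):

	sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	    -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A3 'hosts {'
	# expected output, per the sed expression above:
	#         hosts {
	#            192.168.49.1 host.minikube.internal
	#            fallthrough
	#         }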
	I0918 18:56:21.236930  648496 node_ready.go:35] waiting up to 6m0s for node "addons-351470" to be "Ready" ...
	I0918 18:56:22.857372  648496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.47929133s)
	I0918 18:56:23.326748  648496 node_ready.go:58] node "addons-351470" has status "Ready":"False"
	I0918 18:56:23.685051  648496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.160805428s)
	I0918 18:56:23.685132  648496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.138549779s)
	I0918 18:56:24.342979  648496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.680657509s)
	I0918 18:56:24.343049  648496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.647963508s)
	I0918 18:56:24.343073  648496 addons.go:467] Verifying addon registry=true in "addons-351470"
	I0918 18:56:24.343102  648496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.770581646s)
	I0918 18:56:24.343116  648496 addons.go:467] Verifying addon ingress=true in "addons-351470"
	I0918 18:56:24.346311  648496 out.go:177] * Verifying ingress addon...
	I0918 18:56:24.343418  648496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.194354268s)
	I0918 18:56:24.343465  648496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.879646063s)
	I0918 18:56:24.343614  648496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.366891774s)
	I0918 18:56:24.348894  648496 out.go:177] * Verifying registry addon...
	W0918 18:56:24.348972  648496 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0918 18:56:24.349085  648496 addons.go:467] Verifying addon metrics-server=true in "addons-351470"
	I0918 18:56:24.352011  648496 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0918 18:56:24.356141  648496 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0918 18:56:24.352449  648496 retry.go:31] will retry after 131.401325ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0918 18:56:24.365238  648496 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0918 18:56:24.365269  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:24.370785  648496 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0918 18:56:24.370809  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
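The kapi polls above re-check the labelled pods about once a second until they leave Pending. Outside the harness the same gate can be written with kubectl wait (a sketch; for ingress-nginx the selector is narrowed to the controller component, since the one-shot admission job pods carry the same name label but terminate instead of becoming Ready):

	kubectl --context addons-351470 -n ingress-nginx wait pod \
	    -l app.kubernetes.io/component=controller --for=condition=Ready --timeout=6m
	kubectl --context addons-351470 -n kube-system wait pod \
	    -l kubernetes.io/minikube-addons=registry --for=condition=Ready --timeout=6m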
	I0918 18:56:24.411919  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:24.428203  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:24.488695  648496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0918 18:56:24.778059  648496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.772062289s)
	I0918 18:56:24.778105  648496 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-351470"
	I0918 18:56:24.780408  648496 out.go:177] * Verifying csi-hostpath-driver addon...
	I0918 18:56:24.784325  648496 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0918 18:56:24.793994  648496 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0918 18:56:24.794041  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:24.799966  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:24.917059  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:24.940532  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:25.306559  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:25.421789  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:25.436265  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:25.748599  648496 node_ready.go:58] node "addons-351470" has status "Ready":"False"
	I0918 18:56:25.804802  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:25.923763  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:25.947261  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:26.152067  648496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.663321748s)
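The earlier apply failed with "no matches for kind VolumeSnapshotClass ... ensure CRDs are installed first" because the VolumeSnapshotClass object was submitted in the same batch as the CRD that defines it, before the API server had registered the new kind; the forced re-apply that just completed succeeds since the CRDs are established by then. A guard that would sidestep the retry (a sketch, not what the harness does):

	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=Established --timeout=60s \
	    crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml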
	I0918 18:56:26.312652  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:26.417316  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:26.433263  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:26.808752  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:26.895728  648496 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0918 18:56:26.895825  648496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-351470
	I0918 18:56:26.916466  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:26.933670  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:26.940339  648496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33415 SSHKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/addons-351470/id_rsa Username:docker}
	I0918 18:56:27.149234  648496 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0918 18:56:27.226398  648496 addons.go:231] Setting addon gcp-auth=true in "addons-351470"
	I0918 18:56:27.226456  648496 host.go:66] Checking if "addons-351470" exists ...
	I0918 18:56:27.226960  648496 cli_runner.go:164] Run: docker container inspect addons-351470 --format={{.State.Status}}
	I0918 18:56:27.267282  648496 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0918 18:56:27.267333  648496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-351470
	I0918 18:56:27.310702  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:27.317109  648496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33415 SSHKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/addons-351470/id_rsa Username:docker}
	I0918 18:56:27.417509  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:27.433203  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:27.478032  648496 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0918 18:56:27.480683  648496 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0918 18:56:27.483116  648496 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0918 18:56:27.483142  648496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0918 18:56:27.541254  648496 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0918 18:56:27.541287  648496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0918 18:56:27.602288  648496 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0918 18:56:27.602313  648496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0918 18:56:27.663212  648496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0918 18:56:27.807752  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:27.922509  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:27.938098  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:28.237584  648496 node_ready.go:58] node "addons-351470" has status "Ready":"False"
	I0918 18:56:28.317366  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:28.418172  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:28.433137  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:28.780726  648496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.11747561s)
	I0918 18:56:28.782478  648496 addons.go:467] Verifying addon gcp-auth=true in "addons-351470"
	I0918 18:56:28.785793  648496 out.go:177] * Verifying gcp-auth addon...
	I0918 18:56:28.789067  648496 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
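gcp-auth couples the credentials file copied to the node earlier (google_application_credentials.json) with a mutating admission webhook that injects them into newly created pods. Once the pod this poll is waiting on turns Ready, the registration can be spot-checked (a sketch; the exact webhook object name is not shown in the log, hence the grep):

	kubectl --context addons-351470 get mutatingwebhookconfigurations | grep -i gcp-auth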
	I0918 18:56:28.840500  648496 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0918 18:56:28.840565  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:28.844673  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:28.868800  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:28.918129  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:28.933255  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:29.312627  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:29.373579  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:29.417252  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:29.432413  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:29.806124  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:29.874215  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:29.917500  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:29.932940  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:30.312644  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:30.373711  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:30.417450  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:30.433233  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:30.736120  648496 node_ready.go:58] node "addons-351470" has status "Ready":"False"
	I0918 18:56:30.805033  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:30.872929  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:30.916959  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:30.933279  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:31.311827  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:31.372820  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:31.416588  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:31.433309  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:31.805408  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:31.873257  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:31.917532  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:31.932837  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:32.308015  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:32.374178  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:32.417102  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:32.433852  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:32.736555  648496 node_ready.go:58] node "addons-351470" has status "Ready":"False"
	I0918 18:56:32.805378  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:32.873364  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:32.916905  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:32.933373  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:33.321493  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:33.373174  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:33.420643  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:33.433304  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:33.816093  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:33.893341  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:33.916665  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:33.934287  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:34.309789  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:34.373178  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:34.417000  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:34.433519  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:34.805056  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:34.882814  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:34.916589  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:34.932901  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:35.237249  648496 node_ready.go:58] node "addons-351470" has status "Ready":"False"
	I0918 18:56:35.313370  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:35.373643  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:35.417191  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:35.432702  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:35.805714  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:35.874178  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:35.919531  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:35.933044  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:36.305491  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:36.372877  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:36.422970  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:36.432807  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:36.805573  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:36.873305  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:36.916679  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:36.932478  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:37.304947  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:37.373101  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:37.416478  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:37.432591  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:37.735462  648496 node_ready.go:58] node "addons-351470" has status "Ready":"False"
	I0918 18:56:37.805100  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:37.873244  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:37.916859  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:37.933021  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:38.304204  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:38.372840  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:38.417011  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:38.433049  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:38.804679  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:38.872614  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:38.916529  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:38.932816  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:39.305561  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:39.373423  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:39.416597  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:39.432684  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:39.736392  648496 node_ready.go:58] node "addons-351470" has status "Ready":"False"
	I0918 18:56:39.805131  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:39.873481  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:39.916262  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:39.932376  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:40.307310  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:40.373099  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:40.416981  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:40.432946  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:40.804282  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:40.872977  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:40.917563  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:40.932829  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:41.304995  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:41.376265  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:41.416500  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:41.432503  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:41.804607  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:41.873078  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:41.916236  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:41.933076  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:42.236651  648496 node_ready.go:58] node "addons-351470" has status "Ready":"False"
	I0918 18:56:42.305320  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:42.373379  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:42.416830  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:42.433169  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:42.804264  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:42.873099  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:42.916657  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:42.932764  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:43.305276  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:43.372793  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:43.416919  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:43.433039  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:43.805323  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:43.873238  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:43.916014  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:43.933098  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:44.305197  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:44.373252  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:44.416808  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:44.433093  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:44.736347  648496 node_ready.go:58] node "addons-351470" has status "Ready":"False"
	I0918 18:56:44.804820  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:44.872372  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:44.916577  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:44.932868  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:45.309364  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:45.376029  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:45.417652  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:45.434726  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:45.805244  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:45.873242  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:45.918211  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:45.932209  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:46.305538  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:46.373371  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:46.416664  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:46.432703  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:46.804848  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:46.872816  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:46.916077  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:46.933090  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:47.236280  648496 node_ready.go:58] node "addons-351470" has status "Ready":"False"
	I0918 18:56:47.304629  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:47.373217  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:47.416916  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:47.433087  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:47.804448  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:47.873623  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:47.916646  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:47.932878  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:48.306215  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:48.372718  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:48.416381  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:48.432432  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:48.804701  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:48.873345  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:48.916470  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:48.932535  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:49.236949  648496 node_ready.go:58] node "addons-351470" has status "Ready":"False"
	I0918 18:56:49.305438  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:49.373046  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:49.416079  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:49.432887  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:49.805383  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:49.872559  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:49.916372  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:49.932566  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:50.305898  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:50.372775  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:50.416632  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:50.432592  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:50.805095  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:50.873285  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:50.916625  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:50.932582  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:51.304889  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:51.372851  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:51.416355  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:51.432841  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:51.736929  648496 node_ready.go:58] node "addons-351470" has status "Ready":"False"
	I0918 18:56:51.804795  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:51.873260  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:51.916867  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:51.933005  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:52.305489  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:52.372630  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:52.417155  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:52.432259  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:52.762208  648496 node_ready.go:49] node "addons-351470" has status "Ready":"True"
	I0918 18:56:52.762234  648496 node_ready.go:38] duration metric: took 31.525262638s waiting for node "addons-351470" to be "Ready" ...
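The node wait above is a poll of the node object's Ready condition, which flips to True once the kubelet and the CNI (kindnet in this run) are up. A manual equivalent against this cluster, using the context and node name from the log, might look like:

	# Print the node's Ready condition status directly.
	kubectl --context addons-351470 get node addons-351470 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	# Or block until it becomes True, much as node_ready.go does.
	kubectl --context addons-351470 wait --for=condition=Ready node/addons-351470 --timeout=6m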
	I0918 18:56:52.762250  648496 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 18:56:52.781332  648496 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-hfcps" in "kube-system" namespace to be "Ready" ...
	I0918 18:56:52.825250  648496 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0918 18:56:52.825280  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:52.878340  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:52.927458  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:52.955009  648496 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0918 18:56:52.955036  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:53.358995  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:53.398473  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:53.419236  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:53.491150  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:53.806181  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:53.873265  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:53.916547  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:53.933002  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:54.310688  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:54.353704  648496 pod_ready.go:92] pod "coredns-5dd5756b68-hfcps" in "kube-system" namespace has status "Ready":"True"
	I0918 18:56:54.353728  648496 pod_ready.go:81] duration metric: took 1.572361024s waiting for pod "coredns-5dd5756b68-hfcps" in "kube-system" namespace to be "Ready" ...
	I0918 18:56:54.353754  648496 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-351470" in "kube-system" namespace to be "Ready" ...
	I0918 18:56:54.359315  648496 pod_ready.go:92] pod "etcd-addons-351470" in "kube-system" namespace has status "Ready":"True"
	I0918 18:56:54.359345  648496 pod_ready.go:81] duration metric: took 5.579601ms waiting for pod "etcd-addons-351470" in "kube-system" namespace to be "Ready" ...
	I0918 18:56:54.359360  648496 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-351470" in "kube-system" namespace to be "Ready" ...
	I0918 18:56:54.367447  648496 pod_ready.go:92] pod "kube-apiserver-addons-351470" in "kube-system" namespace has status "Ready":"True"
	I0918 18:56:54.367472  648496 pod_ready.go:81] duration metric: took 8.104062ms waiting for pod "kube-apiserver-addons-351470" in "kube-system" namespace to be "Ready" ...
	I0918 18:56:54.367484  648496 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-351470" in "kube-system" namespace to be "Ready" ...
	I0918 18:56:54.373856  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:54.374258  648496 pod_ready.go:92] pod "kube-controller-manager-addons-351470" in "kube-system" namespace has status "Ready":"True"
	I0918 18:56:54.374276  648496 pod_ready.go:81] duration metric: took 6.784489ms waiting for pod "kube-controller-manager-addons-351470" in "kube-system" namespace to be "Ready" ...
	I0918 18:56:54.374290  648496 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-f7vqg" in "kube-system" namespace to be "Ready" ...
	I0918 18:56:54.416291  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:54.434255  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:54.737468  648496 pod_ready.go:92] pod "kube-proxy-f7vqg" in "kube-system" namespace has status "Ready":"True"
	I0918 18:56:54.737556  648496 pod_ready.go:81] duration metric: took 363.256598ms waiting for pod "kube-proxy-f7vqg" in "kube-system" namespace to be "Ready" ...
	I0918 18:56:54.737593  648496 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-351470" in "kube-system" namespace to be "Ready" ...
	I0918 18:56:54.806063  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:54.873153  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:54.917383  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:54.933228  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:55.138312  648496 pod_ready.go:92] pod "kube-scheduler-addons-351470" in "kube-system" namespace has status "Ready":"True"
	I0918 18:56:55.138342  648496 pod_ready.go:81] duration metric: took 400.724007ms waiting for pod "kube-scheduler-addons-351470" in "kube-system" namespace to be "Ready" ...
	I0918 18:56:55.138376  648496 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-7c66d45ddc-z9mjl" in "kube-system" namespace to be "Ready" ...
	I0918 18:56:55.310785  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:55.372650  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:55.416597  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:55.438038  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:55.805788  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:55.872695  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:55.917333  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:55.933510  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:56.306719  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:56.372811  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:56.417292  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:56.432728  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:56.806396  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:56.873015  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:56.924773  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:56.939987  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:57.307659  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:57.372450  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:57.416727  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:57.434254  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:57.443436  648496 pod_ready.go:102] pod "metrics-server-7c66d45ddc-z9mjl" in "kube-system" namespace has status "Ready":"False"
	I0918 18:56:57.807954  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:57.872953  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:57.917125  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:57.935872  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:58.307679  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:58.373698  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:58.417273  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:58.433109  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:58.806650  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:58.873593  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:58.917570  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:58.944423  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:59.310532  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:59.373621  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:59.416965  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:59.433707  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:59.443734  648496 pod_ready.go:102] pod "metrics-server-7c66d45ddc-z9mjl" in "kube-system" namespace has status "Ready":"False"
	I0918 18:56:59.806457  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:59.873159  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:59.917078  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:59.933507  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:00.309788  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:00.376613  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:00.417696  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:00.433903  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:00.807704  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:00.873760  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:00.917464  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:00.933572  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:01.321797  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:01.372983  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:01.417232  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:01.433565  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:01.444374  648496 pod_ready.go:102] pod "metrics-server-7c66d45ddc-z9mjl" in "kube-system" namespace has status "Ready":"False"
	I0918 18:57:01.807336  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:01.872873  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:01.916201  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:01.932560  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:02.306673  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:02.373712  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:02.416988  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:02.434568  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:02.806200  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:02.874209  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:02.918434  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:02.933603  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:03.316803  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:03.374168  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:03.417480  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:03.435390  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:03.453459  648496 pod_ready.go:102] pod "metrics-server-7c66d45ddc-z9mjl" in "kube-system" namespace has status "Ready":"False"
	I0918 18:57:03.807190  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:03.874858  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:03.917146  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:03.933611  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:04.309007  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:04.373589  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:04.428247  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:04.434673  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:04.808721  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:04.873735  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:04.924666  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:04.934117  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:05.306852  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:05.373090  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:05.416886  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:05.434087  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:05.810737  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:05.873204  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:05.917655  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:05.933468  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:05.945891  648496 pod_ready.go:102] pod "metrics-server-7c66d45ddc-z9mjl" in "kube-system" namespace has status "Ready":"False"
	I0918 18:57:06.309849  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:06.372695  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:06.450145  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:06.471631  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:06.808598  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:06.873844  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:06.924066  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:06.940323  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:07.310271  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:07.374013  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:07.418386  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:07.433746  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:07.806306  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:07.873563  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:07.917658  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:07.935118  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:07.949262  648496 pod_ready.go:102] pod "metrics-server-7c66d45ddc-z9mjl" in "kube-system" namespace has status "Ready":"False"
	I0918 18:57:08.305953  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:08.373459  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:08.417509  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:08.437500  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:08.806183  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:08.872225  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:08.917444  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:08.933647  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:09.321097  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:09.375010  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:09.421695  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:09.436635  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:09.808263  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:09.873123  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:09.917348  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:09.934160  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:09.954266  648496 pod_ready.go:102] pod "metrics-server-7c66d45ddc-z9mjl" in "kube-system" namespace has status "Ready":"False"
	I0918 18:57:10.311280  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:10.373510  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:10.420784  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:10.481602  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:10.806469  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:10.874071  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:10.917563  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:10.935878  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:11.308805  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:11.373605  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:11.417529  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:11.434106  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:11.807832  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:11.876742  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:11.917777  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:11.944716  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:12.311020  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:12.381506  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:12.441795  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:12.444543  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:12.463916  648496 pod_ready.go:102] pod "metrics-server-7c66d45ddc-z9mjl" in "kube-system" namespace has status "Ready":"False"
	I0918 18:57:12.806808  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:12.873086  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:12.917402  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:12.933488  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:13.305861  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:13.373254  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:13.416629  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:13.439170  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:13.806292  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:13.877358  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:13.917547  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:13.935185  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:14.306178  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:14.372728  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:14.417853  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:14.433543  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:14.806295  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:14.872675  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:14.918907  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:14.933748  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:14.947006  648496 pod_ready.go:102] pod "metrics-server-7c66d45ddc-z9mjl" in "kube-system" namespace has status "Ready":"False"
	I0918 18:57:15.307096  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:15.373510  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:15.420518  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:15.435554  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:15.807417  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:15.874654  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:15.918704  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:15.934103  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:16.321895  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:16.375005  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:16.418710  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:16.436638  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:16.808519  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:16.873436  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:16.918831  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:16.936687  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:16.953164  648496 pod_ready.go:102] pod "metrics-server-7c66d45ddc-z9mjl" in "kube-system" namespace has status "Ready":"False"
	I0918 18:57:17.307558  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:17.373518  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:17.417058  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:17.436200  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:17.806027  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:17.875469  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:17.916837  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:17.935035  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:17.944366  648496 pod_ready.go:92] pod "metrics-server-7c66d45ddc-z9mjl" in "kube-system" namespace has status "Ready":"True"
	I0918 18:57:17.944437  648496 pod_ready.go:81] duration metric: took 22.806050374s waiting for pod "metrics-server-7c66d45ddc-z9mjl" in "kube-system" namespace to be "Ready" ...
	I0918 18:57:17.944473  648496 pod_ready.go:38] duration metric: took 25.182210097s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 18:57:17.944516  648496 api_server.go:52] waiting for apiserver process to appear ...
	I0918 18:57:17.944606  648496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 18:57:17.970173  648496 api_server.go:72] duration metric: took 59.10365399s to wait for apiserver process to appear ...
	I0918 18:57:17.970252  648496 api_server.go:88] waiting for apiserver healthz status ...
	I0918 18:57:17.970299  648496 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0918 18:57:17.980621  648496 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0918 18:57:17.982021  648496 api_server.go:141] control plane version: v1.28.2
	I0918 18:57:17.982047  648496 api_server.go:131] duration metric: took 11.761001ms to wait for apiserver health ...
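The healthz probe is a plain HTTPS GET; any 200 response with the body "ok" (echoed above) counts as healthy. Because /healthz is bound to the anonymous-readable system:public-info-viewer role on recent Kubernetes releases, the same probe can usually be reproduced from the host; -k is needed for minikube's self-signed certificate:

	# Expect the literal body "ok" from the endpoint the harness polls.
	curl -k https://192.168.49.2:8443/healthz
	# Or reuse kubectl's authenticated transport instead:
	kubectl --context addons-351470 get --raw /healthz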
	I0918 18:57:17.982057  648496 system_pods.go:43] waiting for kube-system pods to appear ...
	I0918 18:57:18.002736  648496 system_pods.go:59] 17 kube-system pods found
	I0918 18:57:18.002827  648496 system_pods.go:61] "coredns-5dd5756b68-hfcps" [60a3199b-71b3-4769-b9fd-2e8f4a3063b5] Running
	I0918 18:57:18.002849  648496 system_pods.go:61] "csi-hostpath-attacher-0" [72054a84-2928-4def-a4e3-90aa1e60bcb0] Running
	I0918 18:57:18.002870  648496 system_pods.go:61] "csi-hostpath-resizer-0" [3c473d2f-2f48-4ba5-a9ef-2213775d2843] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0918 18:57:18.002911  648496 system_pods.go:61] "csi-hostpathplugin-cknjm" [2a99e83a-561e-4f98-92f0-b213f2657cdb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0918 18:57:18.002935  648496 system_pods.go:61] "etcd-addons-351470" [67afaa28-c8c1-4f45-97c1-b7c5805b1591] Running
	I0918 18:57:18.002966  648496 system_pods.go:61] "kindnet-ndjjv" [70d06c5a-515c-44f9-8911-6a675242a745] Running
	I0918 18:57:18.002984  648496 system_pods.go:61] "kube-apiserver-addons-351470" [042d8e5f-afca-459b-b1d8-808d33ab8130] Running
	I0918 18:57:18.003011  648496 system_pods.go:61] "kube-controller-manager-addons-351470" [8586a2db-55e9-40f5-877d-a0028183b2b3] Running
	I0918 18:57:18.003040  648496 system_pods.go:61] "kube-ingress-dns-minikube" [38dc4cd8-2f37-4e52-a13c-99f3ec43b6b1] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0918 18:57:18.003059  648496 system_pods.go:61] "kube-proxy-f7vqg" [b3be5896-8575-4aa0-b619-366a58271688] Running
	I0918 18:57:18.003080  648496 system_pods.go:61] "kube-scheduler-addons-351470" [5e985ca7-101f-41ea-a182-d1428c8b509f] Running
	I0918 18:57:18.003099  648496 system_pods.go:61] "metrics-server-7c66d45ddc-z9mjl" [5d85482f-b583-40c4-b7e9-0174b3dedab1] Running
	I0918 18:57:18.003132  648496 system_pods.go:61] "registry-9gb28" [527d0996-363b-4641-aba2-49d6b29da00c] Running
	I0918 18:57:18.003155  648496 system_pods.go:61] "registry-proxy-gzc8v" [b1fe082f-9b6f-41d3-964b-615c0229250d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0918 18:57:18.003175  648496 system_pods.go:61] "snapshot-controller-58dbcc7b99-9bh9g" [fbff0e9a-2906-475a-9447-fa87bc4a5c7a] Running
	I0918 18:57:18.003208  648496 system_pods.go:61] "snapshot-controller-58dbcc7b99-wzvtx" [3310d460-50dc-4e21-b422-128850e43a41] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0918 18:57:18.003230  648496 system_pods.go:61] "storage-provisioner" [6ef0b042-ff4f-49b7-aa27-350439b42e37] Running
	I0918 18:57:18.003292  648496 system_pods.go:74] duration metric: took 21.228148ms to wait for pod list to return data ...
	I0918 18:57:18.003328  648496 default_sa.go:34] waiting for default service account to be created ...
	I0918 18:57:18.015873  648496 default_sa.go:45] found service account: "default"
	I0918 18:57:18.015971  648496 default_sa.go:55] duration metric: took 12.624128ms for default service account to be created ...
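The default-service-account wait matters because the ServiceAccount admission plugin typically rejects pod creation in a namespace until its "default" ServiceAccount exists, so minikube confirms the controller manager has created it before declaring the cluster usable. A one-line manual check, assuming the same context:

	# The controller manager creates this account automatically per namespace.
	kubectl --context addons-351470 get serviceaccount default -o name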
	I0918 18:57:18.015998  648496 system_pods.go:116] waiting for k8s-apps to be running ...
	I0918 18:57:18.034089  648496 system_pods.go:86] 17 kube-system pods found
	I0918 18:57:18.034181  648496 system_pods.go:89] "coredns-5dd5756b68-hfcps" [60a3199b-71b3-4769-b9fd-2e8f4a3063b5] Running
	I0918 18:57:18.034204  648496 system_pods.go:89] "csi-hostpath-attacher-0" [72054a84-2928-4def-a4e3-90aa1e60bcb0] Running
	I0918 18:57:18.034227  648496 system_pods.go:89] "csi-hostpath-resizer-0" [3c473d2f-2f48-4ba5-a9ef-2213775d2843] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0918 18:57:18.034281  648496 system_pods.go:89] "csi-hostpathplugin-cknjm" [2a99e83a-561e-4f98-92f0-b213f2657cdb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0918 18:57:18.034315  648496 system_pods.go:89] "etcd-addons-351470" [67afaa28-c8c1-4f45-97c1-b7c5805b1591] Running
	I0918 18:57:18.034341  648496 system_pods.go:89] "kindnet-ndjjv" [70d06c5a-515c-44f9-8911-6a675242a745] Running
	I0918 18:57:18.034361  648496 system_pods.go:89] "kube-apiserver-addons-351470" [042d8e5f-afca-459b-b1d8-808d33ab8130] Running
	I0918 18:57:18.034399  648496 system_pods.go:89] "kube-controller-manager-addons-351470" [8586a2db-55e9-40f5-877d-a0028183b2b3] Running
	I0918 18:57:18.034431  648496 system_pods.go:89] "kube-ingress-dns-minikube" [38dc4cd8-2f37-4e52-a13c-99f3ec43b6b1] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0918 18:57:18.034453  648496 system_pods.go:89] "kube-proxy-f7vqg" [b3be5896-8575-4aa0-b619-366a58271688] Running
	I0918 18:57:18.034473  648496 system_pods.go:89] "kube-scheduler-addons-351470" [5e985ca7-101f-41ea-a182-d1428c8b509f] Running
	I0918 18:57:18.034505  648496 system_pods.go:89] "metrics-server-7c66d45ddc-z9mjl" [5d85482f-b583-40c4-b7e9-0174b3dedab1] Running
	I0918 18:57:18.034531  648496 system_pods.go:89] "registry-9gb28" [527d0996-363b-4641-aba2-49d6b29da00c] Running
	I0918 18:57:18.034553  648496 system_pods.go:89] "registry-proxy-gzc8v" [b1fe082f-9b6f-41d3-964b-615c0229250d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0918 18:57:18.034574  648496 system_pods.go:89] "snapshot-controller-58dbcc7b99-9bh9g" [fbff0e9a-2906-475a-9447-fa87bc4a5c7a] Running
	I0918 18:57:18.034607  648496 system_pods.go:89] "snapshot-controller-58dbcc7b99-wzvtx" [3310d460-50dc-4e21-b422-128850e43a41] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0918 18:57:18.034637  648496 system_pods.go:89] "storage-provisioner" [6ef0b042-ff4f-49b7-aa27-350439b42e37] Running
	I0918 18:57:18.034660  648496 system_pods.go:126] duration metric: took 18.644937ms to wait for k8s-apps to be running ...
	I0918 18:57:18.034681  648496 system_svc.go:44] waiting for kubelet service to be running ....
	I0918 18:57:18.034767  648496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 18:57:18.076951  648496 system_svc.go:56] duration metric: took 42.259018ms WaitForService to wait for kubelet.
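The kubelet check runs on the node itself over SSH rather than through the API; systemctl's is-active exits 0 only while the unit is active, which is all the harness needs. Roughly the same check by hand, with the profile name from this run:

	# Exit status 0 (and the output "active") means kubelet is running.
	minikube -p addons-351470 ssh "sudo systemctl is-active kubelet"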
	I0918 18:57:18.077028  648496 kubeadm.go:581] duration metric: took 59.210514691s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0918 18:57:18.077065  648496 node_conditions.go:102] verifying NodePressure condition ...
	I0918 18:57:18.081574  648496 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0918 18:57:18.081666  648496 node_conditions.go:123] node cpu capacity is 2
	I0918 18:57:18.081699  648496 node_conditions.go:105] duration metric: took 4.613532ms to run NodePressure ...
	I0918 18:57:18.081741  648496 start.go:228] waiting for startup goroutines ...
	I0918 18:57:18.305992  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:18.375009  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:18.417430  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:18.439056  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:18.806367  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:18.873009  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:18.916642  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:18.933491  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:19.306797  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:19.372435  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:19.416787  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:19.433120  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:19.806364  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:19.874180  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:19.918600  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:19.934337  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:20.306710  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:20.375020  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:20.417168  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:20.433590  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:20.811180  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:20.872986  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:20.917300  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:20.932628  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:21.306354  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:21.372919  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:21.416319  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:21.432809  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:21.810054  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:21.873286  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:21.916172  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:21.932684  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:22.306790  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:22.373330  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:22.417659  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:22.433319  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:22.806609  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:22.873132  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:22.918032  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:22.934025  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:23.315567  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:23.381142  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:23.418497  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:23.436876  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:23.807369  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:23.873628  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:23.917352  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:23.937812  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:24.307309  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:24.374125  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:24.417616  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:24.435553  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:24.806977  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:24.873067  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:24.925049  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:24.933628  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:25.309798  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:25.377750  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:25.420566  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:25.434289  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:25.806501  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:25.872959  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:25.916551  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:25.937003  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:26.306882  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:26.373754  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:26.438253  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:26.449160  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:26.806777  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:26.874670  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:26.926869  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:26.933890  648496 kapi.go:107] duration metric: took 1m2.577745074s to wait for kubernetes.io/minikube-addons=registry ...
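Each kapi.go wait above polls the pods behind one label selector until all report Ready; the registry selector is the first to drain here, at just over a minute. The same wait expressed with kubectl against the kube-system namespace shown in the pod listing (a sketch, not the harness's own code path):

	# Block until every registry pod is Ready, mirroring the kapi.go poll.
	# The 6m timeout is illustrative; the harness's own budget isn't shown in this log.
	kubectl --context addons-351470 -n kube-system wait --timeout=6m \
	  --for=condition=Ready pod -l kubernetes.io/minikube-addons=registry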
	I0918 18:57:27.305659  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:27.373676  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:27.418859  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:27.806750  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:27.872233  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:27.916735  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:28.307478  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:28.373092  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:28.416507  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:28.809139  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:28.874249  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:28.922406  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:29.307368  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:29.373282  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:29.416558  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:29.806303  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:29.872973  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:29.919875  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:30.308699  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:30.373261  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:30.417728  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:30.811773  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:30.873465  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:30.917875  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:31.328879  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:31.373629  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:31.418656  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:31.807262  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:31.874746  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:31.917269  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:32.326518  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:32.379111  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:32.417335  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:32.820115  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:32.872955  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:32.917709  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:33.342895  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:33.375514  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:33.420681  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:33.807870  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:33.873095  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:33.917355  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:34.308591  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:34.373976  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:34.418181  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:34.807057  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:34.872510  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:34.917425  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:35.309117  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:35.375381  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:35.417531  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:35.807449  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:35.873109  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:35.919964  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:36.318363  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:36.373049  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:36.417439  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:36.806610  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:36.873530  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:36.918212  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:37.308047  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:37.374030  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:37.417670  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:37.805654  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:37.874369  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:37.918264  648496 kapi.go:107] duration metric: took 1m13.566249636s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0918 18:57:38.312495  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:38.373682  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:38.806415  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:38.873868  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:39.306247  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:39.373070  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:39.808163  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:39.873143  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:40.320775  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:40.373035  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:40.807634  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:40.873351  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:41.305985  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:41.372637  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:41.807432  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:41.877528  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:42.308885  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:42.380667  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:42.806973  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:42.872985  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:43.306234  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:43.372650  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:43.807038  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:43.872762  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:44.306326  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:44.372915  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:44.810602  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:44.873464  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:45.308522  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:45.374476  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:45.806294  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:45.874083  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:46.306679  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:46.373440  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:46.807371  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:46.873034  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:47.306811  648496 kapi.go:107] duration metric: took 1m22.522483742s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0918 18:57:47.372823  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:47.873327  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:48.372318  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:48.872476  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:49.372561  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:49.872389  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:50.372532  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:50.872665  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:51.372709  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:51.873562  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:52.372353  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:52.873194  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:53.372677  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:53.872630  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:54.372800  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:54.875267  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:55.372900  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:55.877685  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:56.372560  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:56.878412  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:57.372279  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:57.873015  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:58.372855  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:58.873589  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:59.373944  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:59.874096  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:58:00.374360  648496 kapi.go:107] duration metric: took 1m31.585287609s to wait for kubernetes.io/minikube-addons=gcp-auth ...
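	The kapi.go lines above record minikube's addon wait phase: each addon's pods are polled by label selector until they leave Pending, and a duration metric is logged once they are ready. What follows is a minimal sketch of that polling pattern using client-go; the function name, poll interval, and log wording are illustrative assumptions, not minikube's actual code.
	
	// waitpods: hedged sketch of the label-selector wait loop logged above.
	package waitpods
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)
	
	// waitForPods polls pods matching selector in ns until all are Running
	// or the timeout expires.
	func waitForPods(c kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		start := time.Now()
		err := wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
			pods, err := c.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				// Transient errors and empty lists are not fatal; keep polling.
				fmt.Printf("waiting for pod %q, current state: Pending: [%v]\n", selector, err)
				return false, nil
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					return false, nil
				}
			}
			return true, nil
		})
		if err == nil {
			fmt.Printf("duration metric: took %s to wait for %s ...\n", time.Since(start), selector)
		}
		return err
	}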
	I0918 18:58:00.376680  648496 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-351470 cluster.
	I0918 18:58:00.378715  648496 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0918 18:58:00.380591  648496 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0918 18:58:00.382814  648496 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, storage-provisioner, default-storageclass, inspektor-gadget, metrics-server, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0918 18:58:00.385149  648496 addons.go:502] enable addons completed in 1m41.830459175s: enabled=[cloud-spanner ingress-dns storage-provisioner default-storageclass inspektor-gadget metrics-server volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0918 18:58:00.385211  648496 start.go:233] waiting for cluster config update ...
	I0918 18:58:00.385231  648496 start.go:242] writing updated cluster config ...
	I0918 18:58:00.385570  648496 ssh_runner.go:195] Run: rm -f paused
	I0918 18:58:00.459160  648496 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I0918 18:58:00.461669  648496 out.go:177] * Done! kubectl is now configured to use "addons-351470" cluster and "default" namespace by default
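	The gcp-auth notes above describe the opt-out mechanism: the gcp-auth webhook skips any pod that carries the gcp-auth-skip-secret label key. Below is a hedged client-go sketch of creating such a pod; the pod name and the "true" value are placeholders (per the message, the presence of the key is what matters), and the image is the hello-app image that appears elsewhere in this report.
	
	// skipsecret: hedged sketch of opting a pod out of gcp-auth credential mounting.
	package skipsecret
	
	import (
		"context"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)
	
	func createOptOutPod(c kubernetes.Interface) error {
		pod := &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name: "no-gcp-creds", // illustrative name
				Labels: map[string]string{
					"gcp-auth-skip-secret": "true", // the label key is what the webhook checks
				},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{
					{Name: "app", Image: "gcr.io/google-samples/hello-app:1.0"},
				},
			},
		}
		_, err := c.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
		return err
	}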
	
	* 
	* ==> CRI-O <==
	* Sep 18 19:00:55 addons-351470 crio[892]: time="2023-09-18 19:00:55.689815342Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=bbd64176-ca6b-4182-8d69-40a9b16948b9 name=/runtime.v1.ImageService/ImageStatus
	Sep 18 19:00:55 addons-351470 crio[892]: time="2023-09-18 19:00:55.689999934Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a39a074194753e46f21cfbf0b4253444939f276ed100d23d5fc568ada19a9ebb,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb],Size_:28999826,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=bbd64176-ca6b-4182-8d69-40a9b16948b9 name=/runtime.v1.ImageService/ImageStatus
	Sep 18 19:00:55 addons-351470 crio[892]: time="2023-09-18 19:00:55.690772369Z" level=info msg="Creating container: default/hello-world-app-5d77478584-qf9q9/hello-world-app" id=24027d9f-6b41-40f0-9d1b-9d8fa10cb0ac name=/runtime.v1.RuntimeService/CreateContainer
	Sep 18 19:00:55 addons-351470 crio[892]: time="2023-09-18 19:00:55.690867664Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 18 19:00:55 addons-351470 crio[892]: time="2023-09-18 19:00:55.778696343Z" level=info msg="Created container b6b5cf4cc94daa14d21574f18f8f80a50b717da5596b2309bb407ce91f7416bd: default/hello-world-app-5d77478584-qf9q9/hello-world-app" id=24027d9f-6b41-40f0-9d1b-9d8fa10cb0ac name=/runtime.v1.RuntimeService/CreateContainer
	Sep 18 19:00:55 addons-351470 crio[892]: time="2023-09-18 19:00:55.780021880Z" level=info msg="Starting container: b6b5cf4cc94daa14d21574f18f8f80a50b717da5596b2309bb407ce91f7416bd" id=825029a9-a2fa-4635-b2f8-969a12bec773 name=/runtime.v1.RuntimeService/StartContainer
	Sep 18 19:00:55 addons-351470 conmon[8230]: conmon b6b5cf4cc94daa14d215 <ninfo>: container 8242 exited with status 1
	Sep 18 19:00:55 addons-351470 crio[892]: time="2023-09-18 19:00:55.795247057Z" level=info msg="Started container" PID=8242 containerID=b6b5cf4cc94daa14d21574f18f8f80a50b717da5596b2309bb407ce91f7416bd description=default/hello-world-app-5d77478584-qf9q9/hello-world-app id=825029a9-a2fa-4635-b2f8-969a12bec773 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ab0b733152bf77f74cb382e9ae697d0822235b60126dea3a5beaecc201e5af04
	Sep 18 19:00:56 addons-351470 crio[892]: time="2023-09-18 19:00:56.354031526Z" level=info msg="Stopping container: d5f248fa30b79a0b8b6d9096297881f53f0c746a2b75b77bd3a1a3a3f68623fd (timeout: 2s)" id=b3f7f5d8-e30b-445d-9b34-cb98e2866aab name=/runtime.v1.RuntimeService/StopContainer
	Sep 18 19:00:56 addons-351470 crio[892]: time="2023-09-18 19:00:56.530265281Z" level=info msg="Removing container: b75d517fd2b3b57cba20ff46280bd12faa5f8547c673217a18f29905bab19770" id=e6af6dbb-6362-4d8e-8ce5-52d16828004b name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 18 19:00:56 addons-351470 crio[892]: time="2023-09-18 19:00:56.557752648Z" level=info msg="Removed container b75d517fd2b3b57cba20ff46280bd12faa5f8547c673217a18f29905bab19770: default/hello-world-app-5d77478584-qf9q9/hello-world-app" id=e6af6dbb-6362-4d8e-8ce5-52d16828004b name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 18 19:00:58 addons-351470 crio[892]: time="2023-09-18 19:00:58.363840954Z" level=warning msg="Stopping container d5f248fa30b79a0b8b6d9096297881f53f0c746a2b75b77bd3a1a3a3f68623fd with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=b3f7f5d8-e30b-445d-9b34-cb98e2866aab name=/runtime.v1.RuntimeService/StopContainer
	Sep 18 19:00:58 addons-351470 conmon[4628]: conmon d5f248fa30b79a0b8b6d <ninfo>: container 4639 exited with status 137
	Sep 18 19:00:58 addons-351470 crio[892]: time="2023-09-18 19:00:58.524957627Z" level=info msg="Stopped container d5f248fa30b79a0b8b6d9096297881f53f0c746a2b75b77bd3a1a3a3f68623fd: ingress-nginx/ingress-nginx-controller-798b8b85d7-dgtcd/controller" id=b3f7f5d8-e30b-445d-9b34-cb98e2866aab name=/runtime.v1.RuntimeService/StopContainer
	Sep 18 19:00:58 addons-351470 crio[892]: time="2023-09-18 19:00:58.525569815Z" level=info msg="Stopping pod sandbox: ae71f63696a73db3be481dd616c8a94d76d94850b36e3ec80882349b2d577dae" id=43ec6b42-8e8e-4daf-8441-6cde29333073 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 18 19:00:58 addons-351470 crio[892]: time="2023-09-18 19:00:58.529221995Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-ZXRF4TVDJQNCTJ4K - [0:0]\n:KUBE-HP-6QNWMVYOYB34QEXA - [0:0]\n-X KUBE-HP-ZXRF4TVDJQNCTJ4K\n-X KUBE-HP-6QNWMVYOYB34QEXA\nCOMMIT\n"
	Sep 18 19:00:58 addons-351470 crio[892]: time="2023-09-18 19:00:58.530833820Z" level=info msg="Closing host port tcp:80"
	Sep 18 19:00:58 addons-351470 crio[892]: time="2023-09-18 19:00:58.530886825Z" level=info msg="Closing host port tcp:443"
	Sep 18 19:00:58 addons-351470 crio[892]: time="2023-09-18 19:00:58.532751870Z" level=info msg="Host port tcp:80 does not have an open socket"
	Sep 18 19:00:58 addons-351470 crio[892]: time="2023-09-18 19:00:58.532793273Z" level=info msg="Host port tcp:443 does not have an open socket"
	Sep 18 19:00:58 addons-351470 crio[892]: time="2023-09-18 19:00:58.532977988Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-798b8b85d7-dgtcd Namespace:ingress-nginx ID:ae71f63696a73db3be481dd616c8a94d76d94850b36e3ec80882349b2d577dae UID:44baa8c7-937e-4931-a906-7577d4c2dd24 NetNS:/var/run/netns/483f8b1a-ee51-49bd-a18c-e49c86fffd33 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 18 19:00:58 addons-351470 crio[892]: time="2023-09-18 19:00:58.533123859Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-798b8b85d7-dgtcd from CNI network \"kindnet\" (type=ptp)"
	Sep 18 19:00:58 addons-351470 crio[892]: time="2023-09-18 19:00:58.561496589Z" level=info msg="Stopped pod sandbox: ae71f63696a73db3be481dd616c8a94d76d94850b36e3ec80882349b2d577dae" id=43ec6b42-8e8e-4daf-8441-6cde29333073 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 18 19:00:59 addons-351470 crio[892]: time="2023-09-18 19:00:59.541042031Z" level=info msg="Removing container: d5f248fa30b79a0b8b6d9096297881f53f0c746a2b75b77bd3a1a3a3f68623fd" id=ac17d776-9bef-4887-b09a-5d4d81bd9c9f name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 18 19:00:59 addons-351470 crio[892]: time="2023-09-18 19:00:59.560842643Z" level=info msg="Removed container d5f248fa30b79a0b8b6d9096297881f53f0c746a2b75b77bd3a1a3a3f68623fd: ingress-nginx/ingress-nginx-controller-798b8b85d7-dgtcd/controller" id=ac17d776-9bef-4887-b09a-5d4d81bd9c9f name=/runtime.v1.RuntimeService/RemoveContainer
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b6b5cf4cc94da       a39a074194753e46f21cfbf0b4253444939f276ed100d23d5fc568ada19a9ebb                                                             8 seconds ago       Exited              hello-world-app           2                   ab0b733152bf7       hello-world-app-5d77478584-qf9q9
	780c45c50a065       docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70                              2 minutes ago       Running             nginx                     0                   fe4de6152eed8       nginx
	1fd8f42713f99       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:63b520448091bc94aa4dba00d6b3b3c25e410c4fb73aa46feae5b25f9895abaa                 3 minutes ago       Running             gcp-auth                  0                   639a7f7dd8df6       gcp-auth-d4c87556c-7tlfs
	909fd770ecd17       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:543c40fd093964bc9ab509d3e791f9989963021f1e9e4c9c7b6700b02bfb227b   3 minutes ago       Exited              patch                     0                   8b9d4ffccd3ce       ingress-nginx-admission-patch-ghcbs
	9c6ec3fa3fbb9       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:543c40fd093964bc9ab509d3e791f9989963021f1e9e4c9c7b6700b02bfb227b   3 minutes ago       Exited              create                    0                   0f5ae06c0dd24       ingress-nginx-admission-create-ts7wq
	ff91e59590532       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             4 minutes ago       Running             storage-provisioner       0                   f67f4b59fff1b       storage-provisioner
	09640ab64c9e3       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                                             4 minutes ago       Running             coredns                   0                   dfa5b24be9c2e       coredns-5dd5756b68-hfcps
	bc3bc7a2efc34       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26                                                             4 minutes ago       Running             kindnet-cni               0                   25ad99282e234       kindnet-ndjjv
	9b74f354c3e42       7da62c127fc0f2c3473babe4dd0fe1da874278c4e524a490b1781e3e0e6dddfa                                                             4 minutes ago       Running             kube-proxy                0                   304dabec2b3c4       kube-proxy-f7vqg
	2e4f9411a1317       30bb499447fe1bc655853e2e8ac386cdeb28c80263536259cb54f290f9a58d6c                                                             5 minutes ago       Running             kube-apiserver            0                   36f2aa42dd8f6       kube-apiserver-addons-351470
	aa7951c2ccd7b       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                                             5 minutes ago       Running             etcd                      0                   b7e51139bb281       etcd-addons-351470
	b9a946790be0a       64fc40cee3716a4596d219b360ce536adb7998eaeae3f5dbb774d6503f5039d7                                                             5 minutes ago       Running             kube-scheduler            0                   d1bc0a4f2eea8       kube-scheduler-addons-351470
	3d0df458cb176       89d57b83c17862d0ca2dd214e9e5ad425f8d67ecba32d10b846f8d22d3b5597c                                                             5 minutes ago       Running             kube-controller-manager   0                   5b30b1c7cb5a5       kube-controller-manager-addons-351470
	
	* 
	* ==> coredns [09640ab64c9e3fd8591f1e9c07e99e93fae53168af85c6d774a91d832d0b236e] <==
	* [INFO] 10.244.0.16:49293 - 12589 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000053005s
	[INFO] 10.244.0.16:49293 - 24351 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000054195s
	[INFO] 10.244.0.16:49293 - 12075 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000055763s
	[INFO] 10.244.0.16:45242 - 33449 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.001202755s
	[INFO] 10.244.0.16:49293 - 61051 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001244774s
	[INFO] 10.244.0.16:49293 - 11841 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001087785s
	[INFO] 10.244.0.16:49293 - 29301 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000059496s
	[INFO] 10.244.0.16:51969 - 6387 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000101047s
	[INFO] 10.244.0.16:45708 - 26525 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000030851s
	[INFO] 10.244.0.16:45708 - 58511 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000157236s
	[INFO] 10.244.0.16:51969 - 54343 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000042289s
	[INFO] 10.244.0.16:51969 - 43508 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000094499s
	[INFO] 10.244.0.16:45708 - 54202 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000207172s
	[INFO] 10.244.0.16:51969 - 32042 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000044989s
	[INFO] 10.244.0.16:45708 - 754 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000142844s
	[INFO] 10.244.0.16:51969 - 53682 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000140924s
	[INFO] 10.244.0.16:45708 - 24623 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000106962s
	[INFO] 10.244.0.16:51969 - 24652 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000058396s
	[INFO] 10.244.0.16:45708 - 24999 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000050306s
	[INFO] 10.244.0.16:51969 - 14051 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001482387s
	[INFO] 10.244.0.16:45708 - 27892 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001658085s
	[INFO] 10.244.0.16:51969 - 40400 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001441994s
	[INFO] 10.244.0.16:45708 - 36318 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001488698s
	[INFO] 10.244.0.16:51969 - 6197 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000053728s
	[INFO] 10.244.0.16:45708 - 6424 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000032091s
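	The repeated NXDOMAIN lines above are ordinary resolver search-list expansion, not a fault: the cluster's resolv.conf uses ndots:5, so a queried name with fewer than five dots is first tried with each search domain appended, and only the final absolute query returns NOERROR. A small sketch of that expansion follows; the search list is inferred from the suffixes visible in the log, and the simplified relative-first ordering is an assumption.
	
	// dnssearch: hedged sketch of resolv.conf search-list expansion.
	package dnssearch
	
	import "strings"
	
	// candidates returns the names a resolver would try for name, given a
	// search list and an ndots threshold.
	func candidates(name string, search []string, ndots int) []string {
		if strings.Count(name, ".") >= ndots {
			return []string{name + "."} // enough dots: try the name absolutely
		}
		out := make([]string, 0, len(search)+1)
		for _, s := range search {
			out = append(out, name+"."+s+".")
		}
		return append(out, name+".") // the bare name is tried last
	}
	
	// candidates("hello-world-app.default.svc.cluster.local",
	//         []string{"svc.cluster.local", "cluster.local", "us-east-2.compute.internal"}, 5)
	// reproduces the NXDOMAIN sequence above, ending in the NOERROR answer.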
	
	* 
	* ==> describe nodes <==
	* Name:               addons-351470
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-351470
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36
	                    minikube.k8s.io/name=addons-351470
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_18T18_56_07_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-351470
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Sep 2023 18:56:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-351470
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Sep 2023 19:01:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Sep 2023 18:58:39 +0000   Mon, 18 Sep 2023 18:56:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Sep 2023 18:58:39 +0000   Mon, 18 Sep 2023 18:56:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Sep 2023 18:58:39 +0000   Mon, 18 Sep 2023 18:56:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Sep 2023 18:58:39 +0000   Mon, 18 Sep 2023 18:56:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-351470
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 aa1a1936bf2243d08bf379cb71b1e695
	  System UUID:                c5c6d9d6-2050-47b3-8715-c5c8506037d3
	  Boot ID:                    43cd75a3-7352-4de5-a11c-da52fa8117dc
	  Kernel Version:             5.15.0-1044-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-qf9q9         0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m46s
	  gcp-auth                    gcp-auth-d4c87556c-7tlfs                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m36s
	  kube-system                 coredns-5dd5756b68-hfcps                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     4m44s
	  kube-system                 etcd-addons-351470                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         4m58s
	  kube-system                 kindnet-ndjjv                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4m45s
	  kube-system                 kube-apiserver-addons-351470             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 kube-controller-manager-addons-351470    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 kube-proxy-f7vqg                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m45s
	  kube-system                 kube-scheduler-addons-351470             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m40s                kube-proxy       
	  Normal  Starting                 5m6s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m6s (x8 over 5m6s)  kubelet          Node addons-351470 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m6s (x8 over 5m6s)  kubelet          Node addons-351470 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m6s (x8 over 5m6s)  kubelet          Node addons-351470 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m58s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m58s                kubelet          Node addons-351470 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m58s                kubelet          Node addons-351470 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m58s                kubelet          Node addons-351470 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m46s                node-controller  Node addons-351470 event: Registered Node addons-351470 in Controller
	  Normal  NodeReady                4m12s                kubelet          Node addons-351470 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.000693] FS-Cache: N-cookie c=0000001e [p=00000015 fl=2 nc=0 na=1]
	[  +0.000932] FS-Cache: N-cookie d=000000003f524057{9p.inode} n=000000000dcb9a4e
	[  +0.001115] FS-Cache: N-key=[8] 'd06eed0000000000'
	[  +0.003589] FS-Cache: Duplicate cookie detected
	[  +0.000756] FS-Cache: O-cookie c=00000018 [p=00000015 fl=226 nc=0 na=1]
	[  +0.001018] FS-Cache: O-cookie d=000000003f524057{9p.inode} n=000000001afa753f
	[  +0.001043] FS-Cache: O-key=[8] 'd06eed0000000000'
	[  +0.000719] FS-Cache: N-cookie c=0000001f [p=00000015 fl=2 nc=0 na=1]
	[  +0.000926] FS-Cache: N-cookie d=000000003f524057{9p.inode} n=0000000041e09c7b
	[  +0.001050] FS-Cache: N-key=[8] 'd06eed0000000000'
	[  +2.717536] FS-Cache: Duplicate cookie detected
	[  +0.000756] FS-Cache: O-cookie c=00000016 [p=00000015 fl=226 nc=0 na=1]
	[  +0.001073] FS-Cache: O-cookie d=000000003f524057{9p.inode} n=00000000bf75ac96
	[  +0.001037] FS-Cache: O-key=[8] 'cf6eed0000000000'
	[  +0.000781] FS-Cache: N-cookie c=00000021 [p=00000015 fl=2 nc=0 na=1]
	[  +0.000928] FS-Cache: N-cookie d=000000003f524057{9p.inode} n=000000000dcb9a4e
	[  +0.001071] FS-Cache: N-key=[8] 'cf6eed0000000000'
	[  +0.385708] FS-Cache: Duplicate cookie detected
	[  +0.000766] FS-Cache: O-cookie c=0000001b [p=00000015 fl=226 nc=0 na=1]
	[  +0.000934] FS-Cache: O-cookie d=000000003f524057{9p.inode} n=0000000078afb02a
	[  +0.001146] FS-Cache: O-key=[8] 'd56eed0000000000'
	[  +0.000766] FS-Cache: N-cookie c=00000022 [p=00000015 fl=2 nc=0 na=1]
	[  +0.000936] FS-Cache: N-cookie d=000000003f524057{9p.inode} n=0000000051123a3d
	[  +0.001113] FS-Cache: N-key=[8] 'd56eed0000000000'
	[ +26.862938] new mount options do not match the existing superblock, will be ignored
	
	* 
	* ==> etcd [aa7951c2ccd7ba064436381289baf8c319bc9403a2669b598c4ea318e47aad2e] <==
	* {"level":"info","ts":"2023-09-18T18:56:19.245064Z","caller":"traceutil/trace.go:171","msg":"trace[2035749418] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/certificate-controller; range_end:; response_count:1; response_revision:350; }","duration":"184.447071ms","start":"2023-09-18T18:56:19.060604Z","end":"2023-09-18T18:56:19.245051Z","steps":["trace[2035749418] 'agreement among raft nodes before linearized reading'  (duration: 181.747995ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-18T18:56:19.245266Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"184.689148ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/endpointslice-controller\" ","response":"range_response_count:1 size:214"}
	{"level":"info","ts":"2023-09-18T18:56:19.245296Z","caller":"traceutil/trace.go:171","msg":"trace[1487117599] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/endpointslice-controller; range_end:; response_count:1; response_revision:350; }","duration":"184.723158ms","start":"2023-09-18T18:56:19.060567Z","end":"2023-09-18T18:56:19.24529Z","steps":["trace[1487117599] 'agreement among raft nodes before linearized reading'  (duration: 176.561997ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-18T18:56:19.396538Z","caller":"traceutil/trace.go:171","msg":"trace[710800917] transaction","detail":"{read_only:false; response_revision:357; number_of_response:1; }","duration":"107.00896ms","start":"2023-09-18T18:56:19.289494Z","end":"2023-09-18T18:56:19.396503Z","steps":["trace[710800917] 'process raft request'  (duration: 106.9605ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-18T18:56:19.396953Z","caller":"traceutil/trace.go:171","msg":"trace[2068315121] transaction","detail":"{read_only:false; response_revision:356; number_of_response:1; }","duration":"128.802083ms","start":"2023-09-18T18:56:19.26814Z","end":"2023-09-18T18:56:19.396942Z","steps":["trace[2068315121] 'process raft request'  (duration: 125.107063ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-18T18:56:21.638119Z","caller":"traceutil/trace.go:171","msg":"trace[962160822] transaction","detail":"{read_only:false; response_revision:383; number_of_response:1; }","duration":"125.255429ms","start":"2023-09-18T18:56:21.512848Z","end":"2023-09-18T18:56:21.638104Z","steps":["trace[962160822] 'process raft request'  (duration: 119.042315ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-18T18:56:22.146153Z","caller":"traceutil/trace.go:171","msg":"trace[963383276] linearizableReadLoop","detail":"{readStateIndex:402; appliedIndex:401; }","duration":"174.106508ms","start":"2023-09-18T18:56:21.972022Z","end":"2023-09-18T18:56:22.146129Z","steps":["trace[963383276] 'read index received'  (duration: 136.932836ms)","trace[963383276] 'applied index is now lower than readState.Index'  (duration: 37.171728ms)"],"step_count":2}
	{"level":"warn","ts":"2023-09-18T18:56:22.177815Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"207.712333ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/default/cloud-spanner-emulator\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-09-18T18:56:22.178677Z","caller":"traceutil/trace.go:171","msg":"trace[419712821] range","detail":"{range_begin:/registry/services/specs/default/cloud-spanner-emulator; range_end:; response_count:0; response_revision:391; }","duration":"208.582664ms","start":"2023-09-18T18:56:21.970076Z","end":"2023-09-18T18:56:22.178659Z","steps":["trace[419712821] 'agreement among raft nodes before linearized reading'  (duration: 176.122905ms)","trace[419712821] 'range keys from in-memory index tree'  (duration: 30.588397ms)"],"step_count":2}
	{"level":"info","ts":"2023-09-18T18:56:22.210504Z","caller":"traceutil/trace.go:171","msg":"trace[561903903] transaction","detail":"{read_only:false; response_revision:392; number_of_response:1; }","duration":"120.034591ms","start":"2023-09-18T18:56:22.080856Z","end":"2023-09-18T18:56:22.200891Z","steps":["trace[561903903] 'process raft request'  (duration: 50.295953ms)","trace[561903903] 'get key's previous created_revision and leaseID' {req_type:put; key:/registry/replicasets/kube-system/coredns-5dd5756b68; req_size:3734; } (duration: 15.705088ms)","trace[561903903] 'marshal mvccpb.KeyValue' {req_type:put; key:/registry/replicasets/kube-system/coredns-5dd5756b68; req_size:3734; } (duration: 35.622017ms)"],"step_count":3}
	{"level":"info","ts":"2023-09-18T18:56:22.182975Z","caller":"traceutil/trace.go:171","msg":"trace[558194179] transaction","detail":"{read_only:false; response_revision:391; number_of_response:1; }","duration":"212.821055ms","start":"2023-09-18T18:56:21.970136Z","end":"2023-09-18T18:56:22.182957Z","steps":["trace[558194179] 'process raft request'  (duration: 138.904482ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-18T18:56:22.21763Z","caller":"traceutil/trace.go:171","msg":"trace[923681165] transaction","detail":"{read_only:false; response_revision:393; number_of_response:1; }","duration":"126.730367ms","start":"2023-09-18T18:56:22.090885Z","end":"2023-09-18T18:56:22.217615Z","steps":["trace[923681165] 'process raft request'  (duration: 91.934504ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-18T18:56:22.216903Z","caller":"traceutil/trace.go:171","msg":"trace[1575384881] transaction","detail":"{read_only:false; response_revision:394; number_of_response:1; }","duration":"125.751423ms","start":"2023-09-18T18:56:22.091136Z","end":"2023-09-18T18:56:22.216887Z","steps":["trace[1575384881] 'process raft request'  (duration: 92.000777ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-18T18:56:22.276021Z","caller":"traceutil/trace.go:171","msg":"trace[654307322] transaction","detail":"{read_only:false; response_revision:395; number_of_response:1; }","duration":"165.139762ms","start":"2023-09-18T18:56:22.110857Z","end":"2023-09-18T18:56:22.275997Z","steps":["trace[654307322] 'process raft request'  (duration: 105.836842ms)","trace[654307322] 'compare'  (duration: 40.675739ms)"],"step_count":2}
	{"level":"info","ts":"2023-09-18T18:56:22.276415Z","caller":"traceutil/trace.go:171","msg":"trace[875341937] linearizableReadLoop","detail":"{readStateIndex:406; appliedIndex:405; }","duration":"118.063922ms","start":"2023-09-18T18:56:22.158342Z","end":"2023-09-18T18:56:22.276406Z","steps":["trace[875341937] 'read index received'  (duration: 47.150221ms)","trace[875341937] 'applied index is now lower than readState.Index'  (duration: 70.912692ms)"],"step_count":2}
	{"level":"warn","ts":"2023-09-18T18:56:22.276973Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.84103ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges/default/\" range_end:\"/registry/limitranges/default0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-09-18T18:56:22.29902Z","caller":"traceutil/trace.go:171","msg":"trace[612501840] range","detail":"{range_begin:/registry/limitranges/default/; range_end:/registry/limitranges/default0; response_count:0; response_revision:397; }","duration":"151.873998ms","start":"2023-09-18T18:56:22.147114Z","end":"2023-09-18T18:56:22.298988Z","steps":["trace[612501840] 'agreement among raft nodes before linearized reading'  (duration: 129.824226ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-18T18:56:22.277006Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"166.322588ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/minikube-ingress-dns\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-09-18T18:56:22.299247Z","caller":"traceutil/trace.go:171","msg":"trace[2039677809] range","detail":"{range_begin:/registry/clusterroles/minikube-ingress-dns; range_end:; response_count:0; response_revision:397; }","duration":"188.551307ms","start":"2023-09-18T18:56:22.110679Z","end":"2023-09-18T18:56:22.29923Z","steps":["trace[2039677809] 'agreement among raft nodes before linearized reading'  (duration: 166.313152ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-18T18:56:22.277027Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"185.932675ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-09-18T18:56:22.29938Z","caller":"traceutil/trace.go:171","msg":"trace[835302840] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/storage-provisioner; range_end:; response_count:0; response_revision:397; }","duration":"208.283134ms","start":"2023-09-18T18:56:22.09109Z","end":"2023-09-18T18:56:22.299373Z","steps":["trace[835302840] 'agreement among raft nodes before linearized reading'  (duration: 185.923723ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-18T18:56:22.27706Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"196.250624ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiextensions.k8s.io/customresourcedefinitions/\" range_end:\"/registry/apiextensions.k8s.io/customresourcedefinitions0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-09-18T18:56:22.299531Z","caller":"traceutil/trace.go:171","msg":"trace[1036173628] range","detail":"{range_begin:/registry/apiextensions.k8s.io/customresourcedefinitions/; range_end:/registry/apiextensions.k8s.io/customresourcedefinitions0; response_count:0; response_revision:397; }","duration":"218.728493ms","start":"2023-09-18T18:56:22.080796Z","end":"2023-09-18T18:56:22.299524Z","steps":["trace[1036173628] 'agreement among raft nodes before linearized reading'  (duration: 196.241927ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-18T18:56:22.300118Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.772043ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllers/kube-system/registry\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-09-18T18:56:22.300163Z","caller":"traceutil/trace.go:171","msg":"trace[547363066] range","detail":"{range_begin:/registry/controllers/kube-system/registry; range_end:; response_count:0; response_revision:398; }","duration":"103.829094ms","start":"2023-09-18T18:56:22.196328Z","end":"2023-09-18T18:56:22.300157Z","steps":["trace[547363066] 'agreement among raft nodes before linearized reading'  (duration: 103.743341ms)"],"step_count":1}
	
	* 
	* ==> gcp-auth [1fd8f42713f99d9bf278658a3bddc08632d8a5184dcf5796706a3fb27d74d102] <==
	* 2023/09/18 18:57:59 GCP Auth Webhook started!
	2023/09/18 18:58:10 Ready to marshal response ...
	2023/09/18 18:58:10 Ready to write response ...
	2023/09/18 18:58:12 Ready to marshal response ...
	2023/09/18 18:58:12 Ready to write response ...
	2023/09/18 18:58:18 Ready to marshal response ...
	2023/09/18 18:58:18 Ready to write response ...
	2023/09/18 18:58:38 Ready to marshal response ...
	2023/09/18 18:58:38 Ready to write response ...
	2023/09/18 19:00:38 Ready to marshal response ...
	2023/09/18 19:00:38 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  19:01:04 up  2:43,  0 users,  load average: 0.36, 1.42, 1.96
	Linux addons-351470 5.15.0-1044-aws #49~20.04.1-Ubuntu SMP Mon Aug 21 17:10:24 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [bc3bc7a2efc3455ae2c556d097f198d3b762b97baec3db87dafb598883ba6f4f] <==
	* I0918 18:59:02.482496       1 main.go:227] handling current node
	I0918 18:59:12.493093       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0918 18:59:12.493119       1 main.go:227] handling current node
	I0918 18:59:22.505228       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0918 18:59:22.505260       1 main.go:227] handling current node
	I0918 18:59:32.509816       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0918 18:59:32.509856       1 main.go:227] handling current node
	I0918 18:59:42.520548       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0918 18:59:42.520580       1 main.go:227] handling current node
	I0918 18:59:52.524808       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0918 18:59:52.524965       1 main.go:227] handling current node
	I0918 19:00:02.536131       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0918 19:00:02.536162       1 main.go:227] handling current node
	I0918 19:00:12.548530       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0918 19:00:12.548555       1 main.go:227] handling current node
	I0918 19:00:22.560041       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0918 19:00:22.560371       1 main.go:227] handling current node
	I0918 19:00:32.564248       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0918 19:00:32.564279       1 main.go:227] handling current node
	I0918 19:00:42.577461       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0918 19:00:42.577492       1 main.go:227] handling current node
	I0918 19:00:52.581565       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0918 19:00:52.581594       1 main.go:227] handling current node
	I0918 19:01:02.592228       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0918 19:01:02.592254       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [2e4f9411a13174f2468bbd89045133116db4a2be404c11eed2c4dd236d814c07] <==
	* I0918 18:58:54.107232       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0918 18:58:54.116663       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0918 18:58:54.116806       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	E0918 18:58:54.381929       1 watch.go:287] unable to encode watch object *v1.WatchEvent: http2: stream closed (&streaming.encoderWithAllocator{writer:responsewriter.outerWithCloseNotifyAndFlush{UserProvidedDecorator:(*metrics.ResponseWriterDelegator)(0x4004151560), InnerCloseNotifierFlusher:struct { httpsnoop.Unwrapper; http.ResponseWriter; http.Flusher; http.CloseNotifier; http.Pusher }{Unwrapper:(*httpsnoop.rw)(0x400b726e60), ResponseWriter:(*httpsnoop.rw)(0x400b726e60), Flusher:(*httpsnoop.rw)(0x400b726e60), CloseNotifier:(*httpsnoop.rw)(0x400b726e60), Pusher:(*httpsnoop.rw)(0x400b726e60)}}, encoder:(*versioning.codec)(0x4008403180), memAllocator:(*runtime.Allocator)(0x4006222ed0)})
	W0918 18:58:55.096408       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0918 18:58:55.118117       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0918 18:58:55.126997       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E0918 18:59:03.349183       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["leader-election","workload-high","workload-low","global-default","catch-all","exempt","system","node-high"] items=[{},{},{},{},{},{},{},{}]
	I0918 18:59:05.954509       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	I0918 18:59:05.960998       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0918 18:59:06.981824       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	E0918 18:59:13.350121       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-high","workload-low","global-default","catch-all","exempt","system","node-high","leader-election"] items=[{},{},{},{},{},{},{},{}]
	I0918 18:59:18.620919       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	E0918 18:59:23.350356       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["node-high","leader-election","workload-high","workload-low","global-default","catch-all","exempt","system"] items=[{},{},{},{},{},{},{},{}]
	E0918 18:59:33.351107       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["node-high","leader-election","workload-high","workload-low","global-default","catch-all","exempt","system"] items=[{},{},{},{},{},{},{},{}]
	E0918 18:59:43.351356       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["catch-all","exempt","system","node-high","leader-election","workload-high","workload-low","global-default"] items=[{},{},{},{},{},{},{},{}]
	E0918 18:59:53.352259       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["catch-all","exempt","system","node-high","leader-election","workload-high","workload-low","global-default"] items=[{},{},{},{},{},{},{},{}]
	E0918 19:00:03.353543       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["node-high","leader-election","workload-high","workload-low","global-default","catch-all","exempt","system"] items=[{},{},{},{},{},{},{},{}]
	E0918 19:00:13.354374       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["exempt","system","node-high","leader-election","workload-high","workload-low","global-default","catch-all"] items=[{},{},{},{},{},{},{},{}]
	E0918 19:00:23.355218       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["system","node-high","leader-election","workload-high","workload-low","global-default","catch-all","exempt"] items=[{},{},{},{},{},{},{},{}]
	E0918 19:00:33.355996       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-low","global-default","catch-all","exempt","system","node-high","leader-election","workload-high"] items=[{},{},{},{},{},{},{},{}]
	I0918 19:00:38.294336       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.107.247.81"}
	E0918 19:00:43.356653       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["catch-all","exempt","system","node-high","leader-election","workload-high","workload-low","global-default"] items=[{},{},{},{},{},{},{},{}]
	E0918 19:00:53.357532       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["catch-all","exempt","system","node-high","leader-election","workload-high","workload-low","global-default"] items=[{},{},{},{},{},{},{},{}]
	E0918 19:01:03.358354       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["exempt","system","node-high","leader-election","workload-high","workload-low","global-default","catch-all"] items=[{},{},{},{},{},{},{},{}]
	
	* 
	* ==> kube-controller-manager [3d0df458cb176112d7f53062169e23e1749c27999712b327814b4c98c095df80] <==
	* W0918 19:00:04.522532       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0918 19:00:04.522567       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0918 19:00:15.265883       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0918 19:00:15.265923       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0918 19:00:17.092942       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0918 19:00:17.092976       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0918 19:00:37.623058       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0918 19:00:37.623094       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0918 19:00:38.006767       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I0918 19:00:38.040417       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-qf9q9"
	I0918 19:00:38.056449       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="51.038679ms"
	I0918 19:00:38.063613       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="6.450916ms"
	I0918 19:00:38.063841       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="44.792µs"
	I0918 19:00:38.080440       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="185.289µs"
	I0918 19:00:40.508200       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="38.441µs"
	I0918 19:00:41.529806       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="94.384µs"
	I0918 19:00:42.507761       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="89.969µs"
	I0918 19:00:55.322230       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0918 19:00:55.326405       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-798b8b85d7" duration="4.57µs"
	I0918 19:00:55.334496       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0918 19:00:56.555704       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="44.677µs"
	W0918 19:01:03.574579       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0918 19:01:03.574610       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0918 19:01:03.971064       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0918 19:01:03.971105       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	* 
	* ==> kube-proxy [9b74f354c3e42b0a24d3b6ed9117840479ba1971081f7d182d6e3d55af67b335] <==
	* I0918 18:56:23.655695       1 server_others.go:69] "Using iptables proxy"
	I0918 18:56:23.724114       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0918 18:56:23.825182       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0918 18:56:23.830560       1 server_others.go:152] "Using iptables Proxier"
	I0918 18:56:23.830783       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0918 18:56:23.830880       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0918 18:56:23.830936       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0918 18:56:23.831176       1 server.go:846] "Version info" version="v1.28.2"
	I0918 18:56:23.831186       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0918 18:56:23.834170       1 config.go:188] "Starting service config controller"
	I0918 18:56:23.834571       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0918 18:56:23.834678       1 config.go:97] "Starting endpoint slice config controller"
	I0918 18:56:23.834712       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0918 18:56:23.835268       1 config.go:315] "Starting node config controller"
	I0918 18:56:23.836463       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0918 18:56:23.935346       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0918 18:56:23.935480       1 shared_informer.go:318] Caches are synced for service config
	I0918 18:56:23.938858       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [b9a946790be0ac81dae9060f7ee78cb6ec1b785ba8f4f3c6bd3c17f0779af07e] <==
	* W0918 18:56:03.285621       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0918 18:56:03.285658       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0918 18:56:03.285738       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0918 18:56:03.285774       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0918 18:56:03.285853       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0918 18:56:03.285888       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0918 18:56:03.285956       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0918 18:56:03.285991       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0918 18:56:03.312088       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0918 18:56:03.312714       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0918 18:56:04.160069       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0918 18:56:04.160196       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0918 18:56:04.215613       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0918 18:56:04.215735       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0918 18:56:04.225477       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0918 18:56:04.225585       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0918 18:56:04.233759       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0918 18:56:04.233917       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0918 18:56:04.269060       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0918 18:56:04.269176       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0918 18:56:04.282826       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0918 18:56:04.282959       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0918 18:56:04.407820       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0918 18:56:04.407883       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0918 18:56:05.964055       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Sep 18 19:00:42 addons-351470 kubelet[1355]: I0918 19:00:42.495938    1355 scope.go:117] "RemoveContainer" containerID="b75d517fd2b3b57cba20ff46280bd12faa5f8547c673217a18f29905bab19770"
	Sep 18 19:00:42 addons-351470 kubelet[1355]: E0918 19:00:42.496212    1355 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5d77478584-qf9q9_default(8f2692c2-bc72-44f1-9820-177693ab1843)\"" pod="default/hello-world-app-5d77478584-qf9q9" podUID="8f2692c2-bc72-44f1-9820-177693ab1843"
	Sep 18 19:00:47 addons-351470 kubelet[1355]: I0918 19:00:47.687543    1355 scope.go:117] "RemoveContainer" containerID="7eff652c30fa10366f2dd0864f6981a5501777a178e6da97d49bc7598f9ea59b"
	Sep 18 19:00:47 addons-351470 kubelet[1355]: E0918 19:00:47.687867    1355 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(38dc4cd8-2f37-4e52-a13c-99f3ec43b6b1)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="38dc4cd8-2f37-4e52-a13c-99f3ec43b6b1"
	Sep 18 19:00:53 addons-351470 kubelet[1355]: E0918 19:00:53.502778    1355 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/c11cfd68f63035bf530ecb371a353220066516b7b28183b70e55999afb7a8997/diff" to get inode usage: stat /var/lib/containers/storage/overlay/c11cfd68f63035bf530ecb371a353220066516b7b28183b70e55999afb7a8997/diff: no such file or directory, extraDiskErr: <nil>
	Sep 18 19:00:54 addons-351470 kubelet[1355]: I0918 19:00:54.333142    1355 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rn5w5\" (UniqueName: \"kubernetes.io/projected/38dc4cd8-2f37-4e52-a13c-99f3ec43b6b1-kube-api-access-rn5w5\") pod \"38dc4cd8-2f37-4e52-a13c-99f3ec43b6b1\" (UID: \"38dc4cd8-2f37-4e52-a13c-99f3ec43b6b1\") "
	Sep 18 19:00:54 addons-351470 kubelet[1355]: I0918 19:00:54.335972    1355 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38dc4cd8-2f37-4e52-a13c-99f3ec43b6b1-kube-api-access-rn5w5" (OuterVolumeSpecName: "kube-api-access-rn5w5") pod "38dc4cd8-2f37-4e52-a13c-99f3ec43b6b1" (UID: "38dc4cd8-2f37-4e52-a13c-99f3ec43b6b1"). InnerVolumeSpecName "kube-api-access-rn5w5". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 18 19:00:54 addons-351470 kubelet[1355]: I0918 19:00:54.433445    1355 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-rn5w5\" (UniqueName: \"kubernetes.io/projected/38dc4cd8-2f37-4e52-a13c-99f3ec43b6b1-kube-api-access-rn5w5\") on node \"addons-351470\" DevicePath \"\""
	Sep 18 19:00:54 addons-351470 kubelet[1355]: I0918 19:00:54.520448    1355 scope.go:117] "RemoveContainer" containerID="7eff652c30fa10366f2dd0864f6981a5501777a178e6da97d49bc7598f9ea59b"
	Sep 18 19:00:54 addons-351470 kubelet[1355]: I0918 19:00:54.689599    1355 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="38dc4cd8-2f37-4e52-a13c-99f3ec43b6b1" path="/var/lib/kubelet/pods/38dc4cd8-2f37-4e52-a13c-99f3ec43b6b1/volumes"
	Sep 18 19:00:54 addons-351470 kubelet[1355]: E0918 19:00:54.889557    1355 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/7cd5695c3fffc37a209f4a557fe128a1eb5a00532466887ca668881ea420d7b5/diff" to get inode usage: stat /var/lib/containers/storage/overlay/7cd5695c3fffc37a209f4a557fe128a1eb5a00532466887ca668881ea420d7b5/diff: no such file or directory, extraDiskErr: <nil>
	Sep 18 19:00:55 addons-351470 kubelet[1355]: I0918 19:00:55.687828    1355 scope.go:117] "RemoveContainer" containerID="b75d517fd2b3b57cba20ff46280bd12faa5f8547c673217a18f29905bab19770"
	Sep 18 19:00:56 addons-351470 kubelet[1355]: I0918 19:00:56.528243    1355 scope.go:117] "RemoveContainer" containerID="b75d517fd2b3b57cba20ff46280bd12faa5f8547c673217a18f29905bab19770"
	Sep 18 19:00:56 addons-351470 kubelet[1355]: I0918 19:00:56.528539    1355 scope.go:117] "RemoveContainer" containerID="b6b5cf4cc94daa14d21574f18f8f80a50b717da5596b2309bb407ce91f7416bd"
	Sep 18 19:00:56 addons-351470 kubelet[1355]: E0918 19:00:56.528802    1355 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5d77478584-qf9q9_default(8f2692c2-bc72-44f1-9820-177693ab1843)\"" pod="default/hello-world-app-5d77478584-qf9q9" podUID="8f2692c2-bc72-44f1-9820-177693ab1843"
	Sep 18 19:00:56 addons-351470 kubelet[1355]: I0918 19:00:56.688957    1355 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="06d5db17-115a-4bf4-8031-0e2e563b4f56" path="/var/lib/kubelet/pods/06d5db17-115a-4bf4-8031-0e2e563b4f56/volumes"
	Sep 18 19:00:56 addons-351470 kubelet[1355]: I0918 19:00:56.689353    1355 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="93b99aea-df66-4ee6-947e-d477b251030c" path="/var/lib/kubelet/pods/93b99aea-df66-4ee6-947e-d477b251030c/volumes"
	Sep 18 19:00:58 addons-351470 kubelet[1355]: I0918 19:00:58.667879    1355 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b2llp\" (UniqueName: \"kubernetes.io/projected/44baa8c7-937e-4931-a906-7577d4c2dd24-kube-api-access-b2llp\") pod \"44baa8c7-937e-4931-a906-7577d4c2dd24\" (UID: \"44baa8c7-937e-4931-a906-7577d4c2dd24\") "
	Sep 18 19:00:58 addons-351470 kubelet[1355]: I0918 19:00:58.667943    1355 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/44baa8c7-937e-4931-a906-7577d4c2dd24-webhook-cert\") pod \"44baa8c7-937e-4931-a906-7577d4c2dd24\" (UID: \"44baa8c7-937e-4931-a906-7577d4c2dd24\") "
	Sep 18 19:00:58 addons-351470 kubelet[1355]: I0918 19:00:58.675452    1355 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44baa8c7-937e-4931-a906-7577d4c2dd24-kube-api-access-b2llp" (OuterVolumeSpecName: "kube-api-access-b2llp") pod "44baa8c7-937e-4931-a906-7577d4c2dd24" (UID: "44baa8c7-937e-4931-a906-7577d4c2dd24"). InnerVolumeSpecName "kube-api-access-b2llp". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 18 19:00:58 addons-351470 kubelet[1355]: I0918 19:00:58.675561    1355 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44baa8c7-937e-4931-a906-7577d4c2dd24-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "44baa8c7-937e-4931-a906-7577d4c2dd24" (UID: "44baa8c7-937e-4931-a906-7577d4c2dd24"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 18 19:00:58 addons-351470 kubelet[1355]: I0918 19:00:58.689432    1355 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="44baa8c7-937e-4931-a906-7577d4c2dd24" path="/var/lib/kubelet/pods/44baa8c7-937e-4931-a906-7577d4c2dd24/volumes"
	Sep 18 19:00:58 addons-351470 kubelet[1355]: I0918 19:00:58.768993    1355 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-b2llp\" (UniqueName: \"kubernetes.io/projected/44baa8c7-937e-4931-a906-7577d4c2dd24-kube-api-access-b2llp\") on node \"addons-351470\" DevicePath \"\""
	Sep 18 19:00:58 addons-351470 kubelet[1355]: I0918 19:00:58.769037    1355 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/44baa8c7-937e-4931-a906-7577d4c2dd24-webhook-cert\") on node \"addons-351470\" DevicePath \"\""
	Sep 18 19:00:59 addons-351470 kubelet[1355]: I0918 19:00:59.539618    1355 scope.go:117] "RemoveContainer" containerID="d5f248fa30b79a0b8b6d9096297881f53f0c746a2b75b77bd3a1a3a3f68623fd"
	
	* 
	* ==> storage-provisioner [ff91e59590532e6807bbf0754b199da42ea10c81bbc85bfccd729d1dabf8256b] <==
	* I0918 18:56:53.596983       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0918 18:56:53.675288       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0918 18:56:53.675388       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0918 18:56:53.684952       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0918 18:56:53.685143       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-351470_fbf979bb-7e69-4903-9fa1-d5de07fb11f6!
	I0918 18:56:53.687458       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"89132dcf-7876-4f00-b25d-71dc01ae6fa5", APIVersion:"v1", ResourceVersion:"827", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-351470_fbf979bb-7e69-4903-9fa1-d5de07fb11f6 became leader
	I0918 18:56:53.786101       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-351470_fbf979bb-7e69-4903-9fa1-d5de07fb11f6!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-351470 -n addons-351470
helpers_test.go:261: (dbg) Run:  kubectl --context addons-351470 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (168.02s)
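For local triage, the two post-mortem probes logged just above can be replayed by hand; a minimal sketch, assuming the addons-351470 profile from this run is still up (both commands are copied from the helpers_test.go steps above, with shell quoting added around the jsonpath expression):

	# Ask minikube whether the profile's API server still reports Running.
	out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-351470 -n addons-351470
	# List every pod, in any namespace, that is not in the Running phase.
	kubectl --context addons-351470 get po -A --field-selector=status.phase!=Running -o=jsonpath='{.items[*].metadata.name}'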

TestAddons/parallel/Headlamp (3.63s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:800: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-351470 --alsologtostderr -v=1
addons_test.go:800: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-351470 --alsologtostderr -v=1: exit status 11 (552.896377ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0918 18:58:06.361376  654815 out.go:296] Setting OutFile to fd 1 ...
	I0918 18:58:06.362594  654815 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 18:58:06.362607  654815 out.go:309] Setting ErrFile to fd 2...
	I0918 18:58:06.362613  654815 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 18:58:06.362940  654815 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17263-642665/.minikube/bin
	I0918 18:58:06.363724  654815 config.go:182] Loaded profile config "addons-351470": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0918 18:58:06.363840  654815 addons.go:594] checking whether the cluster is paused
	I0918 18:58:06.363984  654815 config.go:182] Loaded profile config "addons-351470": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0918 18:58:06.364028  654815 host.go:66] Checking if "addons-351470" exists ...
	I0918 18:58:06.364625  654815 cli_runner.go:164] Run: docker container inspect addons-351470 --format={{.State.Status}}
	I0918 18:58:06.390946  654815 ssh_runner.go:195] Run: systemctl --version
	I0918 18:58:06.391017  654815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-351470
	I0918 18:58:06.413799  654815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33415 SSHKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/addons-351470/id_rsa Username:docker}
	I0918 18:58:06.509447  654815 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0918 18:58:06.509618  654815 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0918 18:58:06.558269  654815 cri.go:89] found id: "e259a9b96620de8c65296ac7e19746227c99d8938415365981914197971ea99c"
	I0918 18:58:06.558294  654815 cri.go:89] found id: "3baca1f7777bffc23ad7847d64a7a559c94dd2655db455ddf6386ed883a461ae"
	I0918 18:58:06.558302  654815 cri.go:89] found id: "50144b7e55c5da7756762f699bf2a2dfc8c474130359ebfc1b73efefba243f56"
	I0918 18:58:06.558307  654815 cri.go:89] found id: "7325764fc9c670a62a5b2dbfaeb37a0bc618b47642de6422883ef21ab12c9da2"
	I0918 18:58:06.558311  654815 cri.go:89] found id: "e0d464e332307757e09c9ac3314c4451eb861286bffa7b2d0f9cbc160028ed2e"
	I0918 18:58:06.558316  654815 cri.go:89] found id: "3e677433b5b07a5789f4f9bdffb057d3d03d2053c59e42bde4424e9660d80da8"
	I0918 18:58:06.558320  654815 cri.go:89] found id: "616c613f76e223e9b84d9a5f51cb55f8813e9b3d29542bc0e5380acf0371bd4f"
	I0918 18:58:06.558324  654815 cri.go:89] found id: "4a8ef7d1c7b42a4468c14e431d53ef72a377639659f0b311c3d58c27c854f25e"
	I0918 18:58:06.558329  654815 cri.go:89] found id: "175f41a8a3fa4efb19044665ce3f64063d598e574ec4e3ab3905b9d274126604"
	I0918 18:58:06.558336  654815 cri.go:89] found id: "2d89c0a605330296aa9a64a81fb0223d3eed4d9dcce7adaf59b360100191a62d"
	I0918 18:58:06.558343  654815 cri.go:89] found id: "584e064431c032ce7322b3979736f04bb1b830c99d35627e75bdac003d7f2e90"
	I0918 18:58:06.558354  654815 cri.go:89] found id: "1b94bbb6d7158f790d8afa4e099d28c14cade2e7d8b7b8d80fa62b846229f03f"
	I0918 18:58:06.558358  654815 cri.go:89] found id: "2496ab92ce3b3ea9bc89792e1719d1a69ec26cab1c4a036a41e16501dbdf3c5b"
	I0918 18:58:06.558369  654815 cri.go:89] found id: "54c9c80f37d714318ca3f5e0790ca0c10a77824a67c565d609522e4c6c05599c"
	I0918 18:58:06.558375  654815 cri.go:89] found id: "ff91e59590532e6807bbf0754b199da42ea10c81bbc85bfccd729d1dabf8256b"
	I0918 18:58:06.558379  654815 cri.go:89] found id: "09640ab64c9e3fd8591f1e9c07e99e93fae53168af85c6d774a91d832d0b236e"
	I0918 18:58:06.558385  654815 cri.go:89] found id: "bc3bc7a2efc3455ae2c556d097f198d3b762b97baec3db87dafb598883ba6f4f"
	I0918 18:58:06.558390  654815 cri.go:89] found id: "9b74f354c3e42b0a24d3b6ed9117840479ba1971081f7d182d6e3d55af67b335"
	I0918 18:58:06.558394  654815 cri.go:89] found id: "2e4f9411a13174f2468bbd89045133116db4a2be404c11eed2c4dd236d814c07"
	I0918 18:58:06.558398  654815 cri.go:89] found id: "aa7951c2ccd7ba064436381289baf8c319bc9403a2669b598c4ea318e47aad2e"
	I0918 18:58:06.558405  654815 cri.go:89] found id: "b9a946790be0ac81dae9060f7ee78cb6ec1b785ba8f4f3c6bd3c17f0779af07e"
	I0918 18:58:06.558412  654815 cri.go:89] found id: "3d0df458cb176112d7f53062169e23e1749c27999712b327814b4c98c095df80"
	I0918 18:58:06.558416  654815 cri.go:89] found id: ""
	I0918 18:58:06.558470  654815 ssh_runner.go:195] Run: sudo runc list -f json
	I0918 18:58:06.613119  654815 out.go:177] 
	W0918 18:58:06.615272  654815 out.go:239] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-09-18T18:58:06Z" level=error msg="stat /run/runc/ccd2c967eda2e96854237ab8a74c1a129efd530198405b0d1575510a228f1358: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-09-18T18:58:06Z" level=error msg="stat /run/runc/ccd2c967eda2e96854237ab8a74c1a129efd530198405b0d1575510a228f1358: no such file or directory"
	
	W0918 18:58:06.615319  654815 out.go:239] * 
	* 
	W0918 18:58:06.834583  654815 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 18:58:06.838465  654815 out.go:177] 

** /stderr **
addons_test.go:802: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-351470 --alsologtostderr -v=1": exit status 11
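The exit status 11 above appears to come from minikube's pre-flight "is the cluster paused" check: as the trace shows, it lists kube-system containers with crictl and then runs sudo runc list -f json, which failed because a container state file under /run/runc had already been removed. A rough manual replay of that check, assuming the same addons-351470 profile (commands are taken from the trace above; the docker format string is re-quoted here for a POSIX shell):

	# Host port Docker mapped to the node's SSH port (22/tcp).
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-351470
	# Inside the node: the kube-system containers the check enumerates...
	out/minikube-linux-arm64 -p addons-351470 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	# ...and the runc state listing whose failure raised MK_ADDON_ENABLE_PAUSED.
	out/minikube-linux-arm64 -p addons-351470 ssh "sudo runc list -f json"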
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-351470
helpers_test.go:235: (dbg) docker inspect addons-351470:

-- stdout --
	[
	    {
	        "Id": "2b54c6ee76a06eb4c585cf18003ceeccee467f7f9e95cd51bbc7284a6ae81c0e",
	        "Created": "2023-09-18T18:55:37.672543127Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 648961,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-09-18T18:55:37.995599906Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:560a33002deec07a703a16e2b1dbf6aecde4c0d46aaefa1cb6df4c8c8a7774a7",
	        "ResolvConfPath": "/var/lib/docker/containers/2b54c6ee76a06eb4c585cf18003ceeccee467f7f9e95cd51bbc7284a6ae81c0e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2b54c6ee76a06eb4c585cf18003ceeccee467f7f9e95cd51bbc7284a6ae81c0e/hostname",
	        "HostsPath": "/var/lib/docker/containers/2b54c6ee76a06eb4c585cf18003ceeccee467f7f9e95cd51bbc7284a6ae81c0e/hosts",
	        "LogPath": "/var/lib/docker/containers/2b54c6ee76a06eb4c585cf18003ceeccee467f7f9e95cd51bbc7284a6ae81c0e/2b54c6ee76a06eb4c585cf18003ceeccee467f7f9e95cd51bbc7284a6ae81c0e-json.log",
	        "Name": "/addons-351470",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-351470:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-351470",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/302d4128170298c9a49dfc6c566ed77fe8ec771cd64821ba9f1f3dc979ecd671-init/diff:/var/lib/docker/overlay2/4e03e4714bce8b0ad83859c0e431c5abac0520d3520e787a29bac63ee8779cc4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/302d4128170298c9a49dfc6c566ed77fe8ec771cd64821ba9f1f3dc979ecd671/merged",
	                "UpperDir": "/var/lib/docker/overlay2/302d4128170298c9a49dfc6c566ed77fe8ec771cd64821ba9f1f3dc979ecd671/diff",
	                "WorkDir": "/var/lib/docker/overlay2/302d4128170298c9a49dfc6c566ed77fe8ec771cd64821ba9f1f3dc979ecd671/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-351470",
	                "Source": "/var/lib/docker/volumes/addons-351470/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-351470",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-351470",
	                "name.minikube.sigs.k8s.io": "addons-351470",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6be29c99e8fe7b80b985892e859f8abb52f6b9e392f2d2e0b40a201bfaf362d7",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33415"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33414"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33411"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33413"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33412"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/6be29c99e8fe",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-351470": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "2b54c6ee76a0",
	                        "addons-351470"
	                    ],
	                    "NetworkID": "c52a98ebb1827bd9b5c5e2fd668d96c6487b504e8c475a0cff92e03a24d9fcd2",
	                    "EndpointID": "28e6915999bc31941809b2377f905912b998922a2caca62098462683a86f52d9",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-351470 -n addons-351470
helpers_test.go:244: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-351470 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-351470 logs -n 25: (1.821057384s)
helpers_test.go:252: TestAddons/parallel/Headlamp logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-623514   | jenkins | v1.31.2 | 18 Sep 23 18:54 UTC |                     |
	|         | -p download-only-623514        |                        |         |         |                     |                     |
	|         | --force --alsologtostderr      |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| start   | -o=json --download-only        | download-only-623514   | jenkins | v1.31.2 | 18 Sep 23 18:54 UTC |                     |
	|         | -p download-only-623514        |                        |         |         |                     |                     |
	|         | --force --alsologtostderr      |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2   |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| delete  | --all                          | minikube               | jenkins | v1.31.2 | 18 Sep 23 18:55 UTC | 18 Sep 23 18:55 UTC |
	| delete  | -p download-only-623514        | download-only-623514   | jenkins | v1.31.2 | 18 Sep 23 18:55 UTC | 18 Sep 23 18:55 UTC |
	| delete  | -p download-only-623514        | download-only-623514   | jenkins | v1.31.2 | 18 Sep 23 18:55 UTC | 18 Sep 23 18:55 UTC |
	| start   | --download-only -p             | download-docker-150608 | jenkins | v1.31.2 | 18 Sep 23 18:55 UTC |                     |
	|         | download-docker-150608         |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| delete  | -p download-docker-150608      | download-docker-150608 | jenkins | v1.31.2 | 18 Sep 23 18:55 UTC | 18 Sep 23 18:55 UTC |
	| start   | --download-only -p             | binary-mirror-476016   | jenkins | v1.31.2 | 18 Sep 23 18:55 UTC |                     |
	|         | binary-mirror-476016           |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --binary-mirror                |                        |         |         |                     |                     |
	|         | http://127.0.0.1:35741         |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-476016        | binary-mirror-476016   | jenkins | v1.31.2 | 18 Sep 23 18:55 UTC | 18 Sep 23 18:55 UTC |
	| start   | -p addons-351470               | addons-351470          | jenkins | v1.31.2 | 18 Sep 23 18:55 UTC | 18 Sep 23 18:58 UTC |
	|         | --wait=true --memory=4000      |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --addons=registry              |                        |         |         |                     |                     |
	|         | --addons=metrics-server        |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots       |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |                        |         |         |                     |                     |
	|         | --addons=gcp-auth              |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner         |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	|         | --addons=ingress               |                        |         |         |                     |                     |
	|         | --addons=ingress-dns           |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-351470          | jenkins | v1.31.2 | 18 Sep 23 18:58 UTC | 18 Sep 23 18:58 UTC |
	|         | addons-351470                  |                        |         |         |                     |                     |
	| addons  | enable headlamp                | addons-351470          | jenkins | v1.31.2 | 18 Sep 23 18:58 UTC |                     |
	|         | -p addons-351470               |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	|---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/18 18:55:14
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.21.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0918 18:55:14.129571  648496 out.go:296] Setting OutFile to fd 1 ...
	I0918 18:55:14.129754  648496 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 18:55:14.129762  648496 out.go:309] Setting ErrFile to fd 2...
	I0918 18:55:14.129768  648496 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 18:55:14.130038  648496 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17263-642665/.minikube/bin
	I0918 18:55:14.130586  648496 out.go:303] Setting JSON to false
	I0918 18:55:14.131543  648496 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":9460,"bootTime":1695053855,"procs":237,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0918 18:55:14.131622  648496 start.go:138] virtualization:  
	I0918 18:55:14.144289  648496 out.go:177] * [addons-351470] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0918 18:55:14.150729  648496 out.go:177]   - MINIKUBE_LOCATION=17263
	I0918 18:55:14.152979  648496 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 18:55:14.150853  648496 notify.go:220] Checking for updates...
	I0918 18:55:14.158372  648496 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17263-642665/kubeconfig
	I0918 18:55:14.160760  648496 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17263-642665/.minikube
	I0918 18:55:14.162694  648496 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0918 18:55:14.165067  648496 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 18:55:14.167501  648496 driver.go:373] Setting default libvirt URI to qemu:///system
	I0918 18:55:14.194602  648496 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0918 18:55:14.194724  648496 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0918 18:55:14.291669  648496 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-09-18 18:55:14.281895585 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0918 18:55:14.291806  648496 docker.go:294] overlay module found
	I0918 18:55:14.295925  648496 out.go:177] * Using the docker driver based on user configuration
	I0918 18:55:14.298139  648496 start.go:298] selected driver: docker
	I0918 18:55:14.298155  648496 start.go:902] validating driver "docker" against <nil>
	I0918 18:55:14.298167  648496 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 18:55:14.298804  648496 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0918 18:55:14.366319  648496 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-09-18 18:55:14.355335125 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0918 18:55:14.366499  648496 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0918 18:55:14.366758  648496 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 18:55:14.368950  648496 out.go:177] * Using Docker driver with root privileges
	I0918 18:55:14.371169  648496 cni.go:84] Creating CNI manager for ""
	I0918 18:55:14.371195  648496 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0918 18:55:14.371207  648496 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0918 18:55:14.371218  648496 start_flags.go:321] config:
	{Name:addons-351470 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-351470 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0918 18:55:14.373476  648496 out.go:177] * Starting control plane node addons-351470 in cluster addons-351470
	I0918 18:55:14.375461  648496 cache.go:122] Beginning downloading kic base image for docker with crio
	I0918 18:55:14.377789  648496 out.go:177] * Pulling base image ...
	I0918 18:55:14.380037  648496 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I0918 18:55:14.380097  648496 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17263-642665/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4
	I0918 18:55:14.380110  648496 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon
	I0918 18:55:14.380117  648496 cache.go:57] Caching tarball of preloaded images
	I0918 18:55:14.380203  648496 preload.go:174] Found /home/jenkins/minikube-integration/17263-642665/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0918 18:55:14.380213  648496 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I0918 18:55:14.380560  648496 profile.go:148] Saving config to /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/config.json ...
	I0918 18:55:14.380590  648496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/config.json: {Name:mk3fb0408b5d9dad7821d789b87d077f5681e779 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 18:55:14.397416  648496 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 to local cache
	I0918 18:55:14.397566  648496 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local cache directory
	I0918 18:55:14.397585  648496 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local cache directory, skipping pull
	I0918 18:55:14.397591  648496 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 exists in cache, skipping pull
	I0918 18:55:14.397599  648496 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 as a tarball
	I0918 18:55:14.397605  648496 cache.go:163] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 from local cache
	I0918 18:55:30.433587  648496 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 from cached tarball
	I0918 18:55:30.433625  648496 cache.go:195] Successfully downloaded all kic artifacts
	I0918 18:55:30.433678  648496 start.go:365] acquiring machines lock for addons-351470: {Name:mk8c04819510b908dbe116c0bcf21061e409e05e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 18:55:30.433800  648496 start.go:369] acquired machines lock for "addons-351470" in 98.905µs
	I0918 18:55:30.433833  648496 start.go:93] Provisioning new machine with config: &{Name:addons-351470 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-351470 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0918 18:55:30.433910  648496 start.go:125] createHost starting for "" (driver="docker")
	I0918 18:55:30.436852  648496 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0918 18:55:30.437106  648496 start.go:159] libmachine.API.Create for "addons-351470" (driver="docker")
	I0918 18:55:30.437132  648496 client.go:168] LocalClient.Create starting
	I0918 18:55:30.437263  648496 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17263-642665/.minikube/certs/ca.pem
	I0918 18:55:30.816732  648496 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17263-642665/.minikube/certs/cert.pem
	I0918 18:55:31.295906  648496 cli_runner.go:164] Run: docker network inspect addons-351470 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0918 18:55:31.317826  648496 cli_runner.go:211] docker network inspect addons-351470 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0918 18:55:31.317906  648496 network_create.go:281] running [docker network inspect addons-351470] to gather additional debugging logs...
	I0918 18:55:31.317927  648496 cli_runner.go:164] Run: docker network inspect addons-351470
	W0918 18:55:31.334543  648496 cli_runner.go:211] docker network inspect addons-351470 returned with exit code 1
	I0918 18:55:31.334577  648496 network_create.go:284] error running [docker network inspect addons-351470]: docker network inspect addons-351470: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-351470 not found
	I0918 18:55:31.334589  648496 network_create.go:286] output of [docker network inspect addons-351470]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-351470 not found
	
	** /stderr **
	I0918 18:55:31.334659  648496 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0918 18:55:31.353482  648496 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4000cbd4b0}
	I0918 18:55:31.353520  648496 network_create.go:123] attempt to create docker network addons-351470 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0918 18:55:31.353575  648496 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-351470 addons-351470
	I0918 18:55:31.429109  648496 network_create.go:107] docker network addons-351470 192.168.49.0/24 created
	I0918 18:55:31.429141  648496 kic.go:117] calculated static IP "192.168.49.2" for the "addons-351470" container
	I0918 18:55:31.429223  648496 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0918 18:55:31.445718  648496 cli_runner.go:164] Run: docker volume create addons-351470 --label name.minikube.sigs.k8s.io=addons-351470 --label created_by.minikube.sigs.k8s.io=true
	I0918 18:55:31.464588  648496 oci.go:103] Successfully created a docker volume addons-351470
	I0918 18:55:31.464680  648496 cli_runner.go:164] Run: docker run --rm --name addons-351470-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-351470 --entrypoint /usr/bin/test -v addons-351470:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -d /var/lib
	I0918 18:55:33.356021  648496 cli_runner.go:217] Completed: docker run --rm --name addons-351470-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-351470 --entrypoint /usr/bin/test -v addons-351470:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -d /var/lib: (1.891290146s)
	I0918 18:55:33.356061  648496 oci.go:107] Successfully prepared a docker volume addons-351470
	I0918 18:55:33.356080  648496 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I0918 18:55:33.356099  648496 kic.go:190] Starting extracting preloaded images to volume ...
	I0918 18:55:33.356196  648496 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17263-642665/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-351470:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -I lz4 -xf /preloaded.tar -C /extractDir
	I0918 18:55:37.593894  648496 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17263-642665/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-351470:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -I lz4 -xf /preloaded.tar -C /extractDir: (4.237642823s)
	I0918 18:55:37.593926  648496 kic.go:199] duration metric: took 4.237824 seconds to extract preloaded images to volume
	W0918 18:55:37.594066  648496 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0918 18:55:37.594187  648496 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0918 18:55:37.653849  648496 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-351470 --name addons-351470 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-351470 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-351470 --network addons-351470 --ip 192.168.49.2 --volume addons-351470:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3
	I0918 18:55:38.014403  648496 cli_runner.go:164] Run: docker container inspect addons-351470 --format={{.State.Running}}
	I0918 18:55:38.044380  648496 cli_runner.go:164] Run: docker container inspect addons-351470 --format={{.State.Status}}
	I0918 18:55:38.068104  648496 cli_runner.go:164] Run: docker exec addons-351470 stat /var/lib/dpkg/alternatives/iptables
	I0918 18:55:38.136690  648496 oci.go:144] the created container "addons-351470" has a running status.
	I0918 18:55:38.136722  648496 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17263-642665/.minikube/machines/addons-351470/id_rsa...
	I0918 18:55:38.281384  648496 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17263-642665/.minikube/machines/addons-351470/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0918 18:55:38.312034  648496 cli_runner.go:164] Run: docker container inspect addons-351470 --format={{.State.Status}}
	I0918 18:55:38.342435  648496 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0918 18:55:38.342466  648496 kic_runner.go:114] Args: [docker exec --privileged addons-351470 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0918 18:55:38.420593  648496 cli_runner.go:164] Run: docker container inspect addons-351470 --format={{.State.Status}}
	I0918 18:55:38.450551  648496 machine.go:88] provisioning docker machine ...
	I0918 18:55:38.450580  648496 ubuntu.go:169] provisioning hostname "addons-351470"
	I0918 18:55:38.450645  648496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-351470
	I0918 18:55:38.477310  648496 main.go:141] libmachine: Using SSH client type: native
	I0918 18:55:38.477735  648496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 33415 <nil> <nil>}
	I0918 18:55:38.477747  648496 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-351470 && echo "addons-351470" | sudo tee /etc/hostname
	I0918 18:55:38.478334  648496 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38154->127.0.0.1:33415: read: connection reset by peer
	I0918 18:55:41.633096  648496 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-351470
	
	I0918 18:55:41.633206  648496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-351470
	I0918 18:55:41.652286  648496 main.go:141] libmachine: Using SSH client type: native
	I0918 18:55:41.652701  648496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 33415 <nil> <nil>}
	I0918 18:55:41.652718  648496 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-351470' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-351470/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-351470' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0918 18:55:41.793034  648496 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 18:55:41.793104  648496 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17263-642665/.minikube CaCertPath:/home/jenkins/minikube-integration/17263-642665/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17263-642665/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17263-642665/.minikube}
	I0918 18:55:41.793139  648496 ubuntu.go:177] setting up certificates
	I0918 18:55:41.793177  648496 provision.go:83] configureAuth start
	I0918 18:55:41.793285  648496 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-351470
	I0918 18:55:41.811472  648496 provision.go:138] copyHostCerts
	I0918 18:55:41.811552  648496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17263-642665/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17263-642665/.minikube/ca.pem (1082 bytes)
	I0918 18:55:41.811684  648496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17263-642665/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17263-642665/.minikube/cert.pem (1123 bytes)
	I0918 18:55:41.811754  648496 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17263-642665/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17263-642665/.minikube/key.pem (1675 bytes)
	I0918 18:55:41.811838  648496 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17263-642665/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17263-642665/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17263-642665/.minikube/certs/ca-key.pem org=jenkins.addons-351470 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-351470]
	I0918 18:55:42.525302  648496 provision.go:172] copyRemoteCerts
	I0918 18:55:42.525371  648496 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0918 18:55:42.525416  648496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-351470
	I0918 18:55:42.547235  648496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33415 SSHKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/addons-351470/id_rsa Username:docker}
	I0918 18:55:42.646651  648496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17263-642665/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0918 18:55:42.676674  648496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17263-642665/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0918 18:55:42.707870  648496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17263-642665/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0918 18:55:42.736786  648496 provision.go:86] duration metric: configureAuth took 943.576423ms
	I0918 18:55:42.736814  648496 ubuntu.go:193] setting minikube options for container-runtime
	I0918 18:55:42.736998  648496 config.go:182] Loaded profile config "addons-351470": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0918 18:55:42.737110  648496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-351470
	I0918 18:55:42.755826  648496 main.go:141] libmachine: Using SSH client type: native
	I0918 18:55:42.756240  648496 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 33415 <nil> <nil>}
	I0918 18:55:42.756271  648496 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0918 18:55:43.015456  648496 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0918 18:55:43.015482  648496 machine.go:91] provisioned docker machine in 4.564911182s
	I0918 18:55:43.015492  648496 client.go:171] LocalClient.Create took 12.578354739s
	I0918 18:55:43.015503  648496 start.go:167] duration metric: libmachine.API.Create for "addons-351470" took 12.578400089s
	I0918 18:55:43.015511  648496 start.go:300] post-start starting for "addons-351470" (driver="docker")
	I0918 18:55:43.015521  648496 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0918 18:55:43.015603  648496 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0918 18:55:43.015653  648496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-351470
	I0918 18:55:43.037893  648496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33415 SSHKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/addons-351470/id_rsa Username:docker}
	I0918 18:55:43.139152  648496 ssh_runner.go:195] Run: cat /etc/os-release
	I0918 18:55:43.143365  648496 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0918 18:55:43.143400  648496 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0918 18:55:43.143412  648496 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0918 18:55:43.143420  648496 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0918 18:55:43.143431  648496 filesync.go:126] Scanning /home/jenkins/minikube-integration/17263-642665/.minikube/addons for local assets ...
	I0918 18:55:43.143506  648496 filesync.go:126] Scanning /home/jenkins/minikube-integration/17263-642665/.minikube/files for local assets ...
	I0918 18:55:43.143534  648496 start.go:303] post-start completed in 128.01724ms
	I0918 18:55:43.143868  648496 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-351470
	I0918 18:55:43.161156  648496 profile.go:148] Saving config to /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/config.json ...
	I0918 18:55:43.161441  648496 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0918 18:55:43.161493  648496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-351470
	I0918 18:55:43.178880  648496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33415 SSHKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/addons-351470/id_rsa Username:docker}
	I0918 18:55:43.277852  648496 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0918 18:55:43.283558  648496 start.go:128] duration metric: createHost completed in 12.849631537s
	I0918 18:55:43.283580  648496 start.go:83] releasing machines lock for "addons-351470", held for 12.849765002s
	I0918 18:55:43.283651  648496 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-351470
	I0918 18:55:43.301431  648496 ssh_runner.go:195] Run: cat /version.json
	I0918 18:55:43.301454  648496 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0918 18:55:43.301485  648496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-351470
	I0918 18:55:43.301522  648496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-351470
	I0918 18:55:43.321063  648496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33415 SSHKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/addons-351470/id_rsa Username:docker}
	I0918 18:55:43.322478  648496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33415 SSHKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/addons-351470/id_rsa Username:docker}
	I0918 18:55:43.416469  648496 ssh_runner.go:195] Run: systemctl --version
	I0918 18:55:43.560468  648496 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0918 18:55:43.710596  648496 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0918 18:55:43.716154  648496 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0918 18:55:43.741457  648496 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0918 18:55:43.741537  648496 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0918 18:55:43.777622  648496 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0918 18:55:43.777643  648496 start.go:469] detecting cgroup driver to use...
	I0918 18:55:43.777676  648496 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0918 18:55:43.777727  648496 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0918 18:55:43.796535  648496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0918 18:55:43.810566  648496 docker.go:196] disabling cri-docker service (if available) ...
	I0918 18:55:43.810674  648496 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0918 18:55:43.826976  648496 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0918 18:55:43.844266  648496 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0918 18:55:43.939181  648496 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0918 18:55:44.060817  648496 docker.go:212] disabling docker service ...
	I0918 18:55:44.060924  648496 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0918 18:55:44.083832  648496 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0918 18:55:44.100633  648496 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0918 18:55:44.196546  648496 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0918 18:55:44.303130  648496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0918 18:55:44.317150  648496 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 18:55:44.337602  648496 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0918 18:55:44.337669  648496 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 18:55:44.350393  648496 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0918 18:55:44.350466  648496 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 18:55:44.364380  648496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 18:55:44.383342  648496 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 18:55:44.395383  648496 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0918 18:55:44.406558  648496 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0918 18:55:44.417472  648496 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0918 18:55:44.427933  648496 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 18:55:44.515011  648496 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0918 18:55:44.644632  648496 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0918 18:55:44.644717  648496 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0918 18:55:44.649545  648496 start.go:537] Will wait 60s for crictl version
	I0918 18:55:44.649608  648496 ssh_runner.go:195] Run: which crictl
	I0918 18:55:44.654096  648496 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0918 18:55:44.704453  648496 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0918 18:55:44.704552  648496 ssh_runner.go:195] Run: crio --version
	I0918 18:55:44.747608  648496 ssh_runner.go:195] Run: crio --version
	I0918 18:55:44.792302  648496 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.6 ...
	I0918 18:55:44.794751  648496 cli_runner.go:164] Run: docker network inspect addons-351470 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0918 18:55:44.811633  648496 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0918 18:55:44.816299  648496 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 18:55:44.830011  648496 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I0918 18:55:44.830082  648496 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 18:55:44.892962  648496 crio.go:496] all images are preloaded for cri-o runtime.
	I0918 18:55:44.892984  648496 crio.go:415] Images already preloaded, skipping extraction
	I0918 18:55:44.893040  648496 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 18:55:44.936953  648496 crio.go:496] all images are preloaded for cri-o runtime.
	I0918 18:55:44.936973  648496 cache_images.go:84] Images are preloaded, skipping loading
	I0918 18:55:44.937070  648496 ssh_runner.go:195] Run: crio config
	I0918 18:55:44.993662  648496 cni.go:84] Creating CNI manager for ""
	I0918 18:55:44.993683  648496 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0918 18:55:44.993720  648496 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0918 18:55:44.993744  648496 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-351470 NodeName:addons-351470 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0918 18:55:44.993884  648496 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-351470"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0918 18:55:44.993953  648496 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-351470 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:addons-351470 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0918 18:55:44.994019  648496 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I0918 18:55:45.010680  648496 binaries.go:44] Found k8s binaries, skipping transfer
	I0918 18:55:45.010770  648496 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0918 18:55:45.034235  648496 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I0918 18:55:45.081585  648496 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0918 18:55:45.124208  648496 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I0918 18:55:45.157236  648496 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0918 18:55:45.163549  648496 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 18:55:45.184370  648496 certs.go:56] Setting up /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470 for IP: 192.168.49.2
	I0918 18:55:45.184435  648496 certs.go:190] acquiring lock for shared ca certs: {Name:mkb16b377708c2d983623434e9d896d9d8fd7133 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 18:55:45.184670  648496 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17263-642665/.minikube/ca.key
	I0918 18:55:45.870169  648496 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17263-642665/.minikube/ca.crt ...
	I0918 18:55:45.870201  648496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17263-642665/.minikube/ca.crt: {Name:mk8ce942029a0252572de9cb7b7d9efee3019b19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 18:55:45.870416  648496 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17263-642665/.minikube/ca.key ...
	I0918 18:55:45.870433  648496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17263-642665/.minikube/ca.key: {Name:mk519f55d35ef0dfd7b5f58eb679af53f0fdf2ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 18:55:45.870526  648496 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17263-642665/.minikube/proxy-client-ca.key
	I0918 18:55:48.079293  648496 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17263-642665/.minikube/proxy-client-ca.crt ...
	I0918 18:55:48.079335  648496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17263-642665/.minikube/proxy-client-ca.crt: {Name:mk3dacbced543e99900eaea9b133012dae11b85b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 18:55:48.079545  648496 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17263-642665/.minikube/proxy-client-ca.key ...
	I0918 18:55:48.079554  648496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17263-642665/.minikube/proxy-client-ca.key: {Name:mkc064c1a51f99c9b98de1d53513177dda997c24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 18:55:48.079690  648496 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/client.key
	I0918 18:55:48.079734  648496 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/client.crt with IP's: []
	I0918 18:55:48.450727  648496 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/client.crt ...
	I0918 18:55:48.450764  648496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/client.crt: {Name:mk9cf70eae8ff62c50839a2cd2c9a29cbe4330ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 18:55:48.450965  648496 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/client.key ...
	I0918 18:55:48.450981  648496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/client.key: {Name:mk9d001b63a8a7ce465d82d0b39908eac9c7eec9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 18:55:48.451600  648496 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/apiserver.key.dd3b5fb2
	I0918 18:55:48.451631  648496 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0918 18:55:48.730498  648496 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/apiserver.crt.dd3b5fb2 ...
	I0918 18:55:48.730534  648496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/apiserver.crt.dd3b5fb2: {Name:mk6e6762897d4c7e3e3cde69c2e29c2bec36ef38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 18:55:48.731200  648496 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/apiserver.key.dd3b5fb2 ...
	I0918 18:55:48.731225  648496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/apiserver.key.dd3b5fb2: {Name:mk2bc6742a825845966f3c6be3f59c519d0c0961 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 18:55:48.731312  648496 certs.go:337] copying /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/apiserver.crt
	I0918 18:55:48.731381  648496 certs.go:341] copying /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/apiserver.key
	I0918 18:55:48.731434  648496 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/proxy-client.key
	I0918 18:55:48.731453  648496 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/proxy-client.crt with IP's: []
	I0918 18:55:50.129170  648496 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/proxy-client.crt ...
	I0918 18:55:50.129208  648496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/proxy-client.crt: {Name:mkfae3f3218f2f6445507927280b4e94eeda031a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 18:55:50.129955  648496 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/proxy-client.key ...
	I0918 18:55:50.129974  648496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/proxy-client.key: {Name:mk381624f0d7b6e5a5f6676b7678903363d91ae2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 18:55:50.130186  648496 certs.go:437] found cert: /home/jenkins/minikube-integration/17263-642665/.minikube/certs/home/jenkins/minikube-integration/17263-642665/.minikube/certs/ca-key.pem (1675 bytes)
	I0918 18:55:50.130235  648496 certs.go:437] found cert: /home/jenkins/minikube-integration/17263-642665/.minikube/certs/home/jenkins/minikube-integration/17263-642665/.minikube/certs/ca.pem (1082 bytes)
	I0918 18:55:50.130271  648496 certs.go:437] found cert: /home/jenkins/minikube-integration/17263-642665/.minikube/certs/home/jenkins/minikube-integration/17263-642665/.minikube/certs/cert.pem (1123 bytes)
	I0918 18:55:50.130302  648496 certs.go:437] found cert: /home/jenkins/minikube-integration/17263-642665/.minikube/certs/home/jenkins/minikube-integration/17263-642665/.minikube/certs/key.pem (1675 bytes)
	I0918 18:55:50.130998  648496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0918 18:55:50.163526  648496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0918 18:55:50.196531  648496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0918 18:55:50.226293  648496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0918 18:55:50.256316  648496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17263-642665/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0918 18:55:50.285901  648496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17263-642665/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0918 18:55:50.315637  648496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17263-642665/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0918 18:55:50.344082  648496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17263-642665/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0918 18:55:50.372471  648496 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17263-642665/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0918 18:55:50.400936  648496 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0918 18:55:50.422129  648496 ssh_runner.go:195] Run: openssl version
	I0918 18:55:50.429557  648496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0918 18:55:50.441465  648496 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0918 18:55:50.446311  648496 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 18 18:55 /usr/share/ca-certificates/minikubeCA.pem
	I0918 18:55:50.446389  648496 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0918 18:55:50.455229  648496 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0918 18:55:50.467050  648496 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0918 18:55:50.471653  648496 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0918 18:55:50.471752  648496 kubeadm.go:404] StartCluster: {Name:addons-351470 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:addons-351470 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0918 18:55:50.471918  648496 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0918 18:55:50.471981  648496 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0918 18:55:50.514883  648496 cri.go:89] found id: ""
	I0918 18:55:50.514956  648496 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0918 18:55:50.525580  648496 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0918 18:55:50.536571  648496 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0918 18:55:50.536688  648496 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 18:55:50.547610  648496 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 18:55:50.547649  648496 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0918 18:55:50.601042  648496 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I0918 18:55:50.601287  648496 kubeadm.go:322] [preflight] Running pre-flight checks
	I0918 18:55:50.646727  648496 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0918 18:55:50.646839  648496 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1044-aws
	I0918 18:55:50.646898  648496 kubeadm.go:322] OS: Linux
	I0918 18:55:50.646970  648496 kubeadm.go:322] CGROUPS_CPU: enabled
	I0918 18:55:50.647042  648496 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0918 18:55:50.647104  648496 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0918 18:55:50.647174  648496 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0918 18:55:50.647234  648496 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0918 18:55:50.647305  648496 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0918 18:55:50.647364  648496 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0918 18:55:50.647431  648496 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0918 18:55:50.647550  648496 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0918 18:55:50.735171  648496 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0918 18:55:50.735330  648496 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0918 18:55:50.735462  648496 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0918 18:55:50.996777  648496 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0918 18:55:50.999954  648496 out.go:204]   - Generating certificates and keys ...
	I0918 18:55:51.000099  648496 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0918 18:55:51.000161  648496 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0918 18:55:51.354564  648496 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0918 18:55:52.016551  648496 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0918 18:55:52.403400  648496 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0918 18:55:53.058358  648496 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0918 18:55:53.641305  648496 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0918 18:55:53.641825  648496 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-351470 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0918 18:55:54.290652  648496 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0918 18:55:54.291176  648496 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-351470 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0918 18:55:54.603528  648496 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0918 18:55:54.836218  648496 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0918 18:55:55.276696  648496 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0918 18:55:55.277093  648496 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0918 18:55:55.713068  648496 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0918 18:55:56.271080  648496 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0918 18:55:56.591944  648496 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0918 18:55:56.979931  648496 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0918 18:55:56.980519  648496 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0918 18:55:56.983114  648496 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0918 18:55:56.986710  648496 out.go:204]   - Booting up control plane ...
	I0918 18:55:56.986863  648496 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0918 18:55:56.986941  648496 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0918 18:55:56.987586  648496 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0918 18:55:56.998638  648496 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0918 18:55:56.999626  648496 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0918 18:55:56.999823  648496 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0918 18:55:57.103960  648496 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0918 18:56:05.107293  648496 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.003397 seconds
	I0918 18:56:05.107414  648496 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0918 18:56:05.124454  648496 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0918 18:56:05.651036  648496 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0918 18:56:05.651224  648496 kubeadm.go:322] [mark-control-plane] Marking the node addons-351470 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0918 18:56:06.163427  648496 kubeadm.go:322] [bootstrap-token] Using token: z2ghwa.ius3vvohde9l6hlk
	I0918 18:56:06.165794  648496 out.go:204]   - Configuring RBAC rules ...
	I0918 18:56:06.165923  648496 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0918 18:56:06.172969  648496 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0918 18:56:06.181749  648496 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0918 18:56:06.187891  648496 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0918 18:56:06.192282  648496 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0918 18:56:06.197809  648496 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0918 18:56:06.216027  648496 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0918 18:56:06.475987  648496 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0918 18:56:06.612146  648496 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0918 18:56:06.612163  648496 kubeadm.go:322] 
	I0918 18:56:06.612220  648496 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0918 18:56:06.612225  648496 kubeadm.go:322] 
	I0918 18:56:06.612297  648496 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0918 18:56:06.612302  648496 kubeadm.go:322] 
	I0918 18:56:06.612325  648496 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0918 18:56:06.612387  648496 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0918 18:56:06.612434  648496 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0918 18:56:06.612439  648496 kubeadm.go:322] 
	I0918 18:56:06.612489  648496 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0918 18:56:06.612494  648496 kubeadm.go:322] 
	I0918 18:56:06.612539  648496 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0918 18:56:06.612544  648496 kubeadm.go:322] 
	I0918 18:56:06.612593  648496 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0918 18:56:06.612663  648496 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0918 18:56:06.612727  648496 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0918 18:56:06.612731  648496 kubeadm.go:322] 
	I0918 18:56:06.612810  648496 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0918 18:56:06.612882  648496 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0918 18:56:06.612886  648496 kubeadm.go:322] 
	I0918 18:56:06.612965  648496 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token z2ghwa.ius3vvohde9l6hlk \
	I0918 18:56:06.613061  648496 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:1471e1bb7c66f1f1f8363746a1e5f2ae35a8554d6ad2342a0b3973b70608e7c8 \
	I0918 18:56:06.613081  648496 kubeadm.go:322] 	--control-plane 
	I0918 18:56:06.613086  648496 kubeadm.go:322] 
	I0918 18:56:06.613165  648496 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0918 18:56:06.613171  648496 kubeadm.go:322] 
	I0918 18:56:06.613247  648496 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token z2ghwa.ius3vvohde9l6hlk \
	I0918 18:56:06.613343  648496 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:1471e1bb7c66f1f1f8363746a1e5f2ae35a8554d6ad2342a0b3973b70608e7c8 
	I0918 18:56:06.615591  648496 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1044-aws\n", err: exit status 1
	I0918 18:56:06.615710  648496 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
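Both preflight warnings above are benign here and each names its own remedy. Roughly, were they relevant:

	# [WARNING Service-Kubelet]: make kubelet start on boot; minikube tolerates
	# this because it manages the kubelet service itself during provisioning.
	sudo systemctl enable kubelet.service
	# [WARNING SystemVerification]: the "configs" module is absent on this AWS
	# kernel; kubeadm only needs it to read the kernel config, which is why the
	# init call above passes SystemVerification in --ignore-preflight-errors.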
	I0918 18:56:06.615877  648496 cni.go:84] Creating CNI manager for ""
	I0918 18:56:06.615890  648496 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0918 18:56:06.618406  648496 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0918 18:56:06.620660  648496 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0918 18:56:06.629983  648496 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.2/kubectl ...
	I0918 18:56:06.630001  648496 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0918 18:56:06.675116  648496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
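The kindnet manifest just applied deploys as a DaemonSet in kube-system; a sketch of watching its rollout with stock kubectl (assuming the DaemonSet name `kindnet` used by minikube's bundled manifest):

	kubectl --kubeconfig /var/lib/minikube/kubeconfig -n kube-system \
	  rollout status daemonset/kindnet --timeout=2m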
	I0918 18:56:07.589794  648496 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0918 18:56:07.589917  648496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 18:56:07.590002  648496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36 minikube.k8s.io/name=addons-351470 minikube.k8s.io/updated_at=2023_09_18T18_56_07_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 18:56:07.739098  648496 ops.go:34] apiserver oom_adj: -16
	I0918 18:56:07.739203  648496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 18:56:07.845481  648496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 18:56:08.456785  648496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 18:56:08.957142  648496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 18:56:09.457069  648496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 18:56:09.956977  648496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 18:56:10.456743  648496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 18:56:10.956760  648496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 18:56:11.456252  648496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 18:56:11.956392  648496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 18:56:12.456261  648496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 18:56:12.956601  648496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 18:56:13.457114  648496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 18:56:13.957003  648496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 18:56:14.456801  648496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 18:56:14.956705  648496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 18:56:15.456741  648496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 18:56:15.956290  648496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 18:56:16.456732  648496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 18:56:16.956968  648496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 18:56:17.456256  648496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 18:56:17.956671  648496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 18:56:18.456715  648496 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 18:56:18.552997  648496 kubeadm.go:1081] duration metric: took 10.96312205s to wait for elevateKubeSystemPrivileges.
	I0918 18:56:18.553024  648496 kubeadm.go:406] StartCluster complete in 28.081276711s
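The ~11s elevateKubeSystemPrivileges wait above is a plain poll: minikube reruns `kubectl get sa default` until the controller-manager has created the default ServiceAccount, having already bound cluster-admin to kube-system's default account via the minikube-rbac clusterrolebinding seen earlier. Done by hand (kubeconfig and binary paths trimmed for brevity), the sequence is:

	# Bind cluster-admin, then poll until the default ServiceAccount exists.
	kubectl create clusterrolebinding minikube-rbac \
	  --clusterrole=cluster-admin --serviceaccount=kube-system:default
	until kubectl get sa default >/dev/null 2>&1; do sleep 0.5; done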
	I0918 18:56:18.553041  648496 settings.go:142] acquiring lock: {Name:mk1cee0139b5f0ae29a168e7793f3f69abc95f11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 18:56:18.553162  648496 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17263-642665/kubeconfig
	I0918 18:56:18.553549  648496 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17263-642665/kubeconfig: {Name:mkbc55d6d811840d4d5667f8f39c79585e0314ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 18:56:18.554276  648496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0918 18:56:18.554564  648496 config.go:182] Loaded profile config "addons-351470": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0918 18:56:18.554674  648496 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0918 18:56:18.554761  648496 addons.go:69] Setting volumesnapshots=true in profile "addons-351470"
	I0918 18:56:18.554777  648496 addons.go:231] Setting addon volumesnapshots=true in "addons-351470"
	I0918 18:56:18.554816  648496 host.go:66] Checking if "addons-351470" exists ...
	I0918 18:56:18.555271  648496 cli_runner.go:164] Run: docker container inspect addons-351470 --format={{.State.Status}}
	I0918 18:56:18.555751  648496 addons.go:69] Setting cloud-spanner=true in profile "addons-351470"
	I0918 18:56:18.555770  648496 addons.go:231] Setting addon cloud-spanner=true in "addons-351470"
	I0918 18:56:18.555824  648496 host.go:66] Checking if "addons-351470" exists ...
	I0918 18:56:18.556203  648496 cli_runner.go:164] Run: docker container inspect addons-351470 --format={{.State.Status}}
	I0918 18:56:18.556731  648496 addons.go:69] Setting inspektor-gadget=true in profile "addons-351470"
	I0918 18:56:18.556757  648496 addons.go:231] Setting addon inspektor-gadget=true in "addons-351470"
	I0918 18:56:18.556789  648496 host.go:66] Checking if "addons-351470" exists ...
	I0918 18:56:18.557186  648496 cli_runner.go:164] Run: docker container inspect addons-351470 --format={{.State.Status}}
	I0918 18:56:18.557503  648496 addons.go:69] Setting metrics-server=true in profile "addons-351470"
	I0918 18:56:18.557524  648496 addons.go:231] Setting addon metrics-server=true in "addons-351470"
	I0918 18:56:18.557562  648496 host.go:66] Checking if "addons-351470" exists ...
	I0918 18:56:18.557933  648496 cli_runner.go:164] Run: docker container inspect addons-351470 --format={{.State.Status}}
	I0918 18:56:18.564013  648496 addons.go:69] Setting registry=true in profile "addons-351470"
	I0918 18:56:18.564046  648496 addons.go:231] Setting addon registry=true in "addons-351470"
	I0918 18:56:18.564092  648496 host.go:66] Checking if "addons-351470" exists ...
	I0918 18:56:18.564522  648496 cli_runner.go:164] Run: docker container inspect addons-351470 --format={{.State.Status}}
	I0918 18:56:18.567372  648496 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-351470"
	I0918 18:56:18.567517  648496 addons.go:69] Setting storage-provisioner=true in profile "addons-351470"
	I0918 18:56:18.567547  648496 addons.go:231] Setting addon storage-provisioner=true in "addons-351470"
	I0918 18:56:18.567599  648496 host.go:66] Checking if "addons-351470" exists ...
	I0918 18:56:18.572257  648496 cli_runner.go:164] Run: docker container inspect addons-351470 --format={{.State.Status}}
	I0918 18:56:18.572456  648496 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-351470"
	I0918 18:56:18.572616  648496 host.go:66] Checking if "addons-351470" exists ...
	I0918 18:56:18.572759  648496 addons.go:69] Setting default-storageclass=true in profile "addons-351470"
	I0918 18:56:18.572785  648496 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-351470"
	I0918 18:56:18.573062  648496 cli_runner.go:164] Run: docker container inspect addons-351470 --format={{.State.Status}}
	I0918 18:56:18.586841  648496 addons.go:69] Setting gcp-auth=true in profile "addons-351470"
	I0918 18:56:18.586927  648496 mustload.go:65] Loading cluster: addons-351470
	I0918 18:56:18.587126  648496 config.go:182] Loaded profile config "addons-351470": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0918 18:56:18.587378  648496 cli_runner.go:164] Run: docker container inspect addons-351470 --format={{.State.Status}}
	I0918 18:56:18.609655  648496 cli_runner.go:164] Run: docker container inspect addons-351470 --format={{.State.Status}}
	I0918 18:56:18.610530  648496 addons.go:69] Setting ingress=true in profile "addons-351470"
	I0918 18:56:18.610565  648496 addons.go:231] Setting addon ingress=true in "addons-351470"
	I0918 18:56:18.623072  648496 host.go:66] Checking if "addons-351470" exists ...
	I0918 18:56:18.623558  648496 cli_runner.go:164] Run: docker container inspect addons-351470 --format={{.State.Status}}
	I0918 18:56:18.620452  648496 addons.go:69] Setting ingress-dns=true in profile "addons-351470"
	I0918 18:56:18.695926  648496 addons.go:231] Setting addon ingress-dns=true in "addons-351470"
	I0918 18:56:18.696008  648496 host.go:66] Checking if "addons-351470" exists ...
	I0918 18:56:18.696484  648496 cli_runner.go:164] Run: docker container inspect addons-351470 --format={{.State.Status}}
	I0918 18:56:18.714467  648496 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.10
	I0918 18:56:18.727522  648496 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I0918 18:56:18.729540  648496 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0918 18:56:18.729561  648496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0918 18:56:18.729626  648496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-351470
	I0918 18:56:18.727857  648496 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I0918 18:56:18.729875  648496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0918 18:56:18.729948  648496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-351470
	I0918 18:56:18.727864  648496 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0918 18:56:18.735038  648496 out.go:177]   - Using image docker.io/registry:2.8.1
	I0918 18:56:18.745610  648496 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0918 18:56:18.744375  648496 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0918 18:56:18.751863  648496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0918 18:56:18.751952  648496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-351470
	I0918 18:56:18.755350  648496 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.20.0
	I0918 18:56:18.757469  648496 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0918 18:56:18.757491  648496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0918 18:56:18.757578  648496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-351470
	I0918 18:56:18.758327  648496 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I0918 18:56:18.758347  648496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0918 18:56:18.758400  648496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-351470
	I0918 18:56:18.819201  648496 host.go:66] Checking if "addons-351470" exists ...
	I0918 18:56:18.844223  648496 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 18:56:18.854803  648496 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 18:56:18.854825  648496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0918 18:56:18.854882  648496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-351470
	I0918 18:56:18.853596  648496 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0918 18:56:18.853700  648496 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-351470" context rescaled to 1 replicas
	I0918 18:56:18.866436  648496 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0918 18:56:18.869947  648496 addons.go:231] Setting addon default-storageclass=true in "addons-351470"
	I0918 18:56:18.874208  648496 host.go:66] Checking if "addons-351470" exists ...
	I0918 18:56:18.874120  648496 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0918 18:56:18.876298  648496 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0918 18:56:18.874131  648496 out.go:177] * Verifying Kubernetes components...
	I0918 18:56:18.874136  648496 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0918 18:56:18.874140  648496 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0918 18:56:18.874969  648496 cli_runner.go:164] Run: docker container inspect addons-351470 --format={{.State.Status}}
	I0918 18:56:18.888082  648496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 18:56:18.891906  648496 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0918 18:56:18.895989  648496 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0918 18:56:18.896012  648496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0918 18:56:18.896081  648496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-351470
	I0918 18:56:18.919884  648496 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0918 18:56:18.920941  648496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33415 SSHKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/addons-351470/id_rsa Username:docker}
	I0918 18:56:18.928980  648496 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0918 18:56:18.926631  648496 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0918 18:56:18.939660  648496 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.8.2
	I0918 18:56:18.948157  648496 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0918 18:56:18.955294  648496 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0918 18:56:18.948495  648496 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0918 18:56:18.955700  648496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33415 SSHKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/addons-351470/id_rsa Username:docker}
	I0918 18:56:18.964137  648496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16083 bytes)
	I0918 18:56:18.964266  648496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-351470
	I0918 18:56:18.968033  648496 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0918 18:56:18.965985  648496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33415 SSHKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/addons-351470/id_rsa Username:docker}
	I0918 18:56:18.966731  648496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33415 SSHKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/addons-351470/id_rsa Username:docker}
	I0918 18:56:18.976771  648496 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0918 18:56:18.976790  648496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0918 18:56:18.976861  648496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-351470
	I0918 18:56:19.014359  648496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33415 SSHKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/addons-351470/id_rsa Username:docker}
	I0918 18:56:19.015930  648496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33415 SSHKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/addons-351470/id_rsa Username:docker}
	I0918 18:56:19.063230  648496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33415 SSHKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/addons-351470/id_rsa Username:docker}
	I0918 18:56:19.065213  648496 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0918 18:56:19.065234  648496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0918 18:56:19.065297  648496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-351470
	I0918 18:56:19.094836  648496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33415 SSHKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/addons-351470/id_rsa Username:docker}
	I0918 18:56:19.115861  648496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33415 SSHKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/addons-351470/id_rsa Username:docker}
	I0918 18:56:19.139857  648496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33415 SSHKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/addons-351470/id_rsa Username:docker}
	I0918 18:56:19.320691  648496 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0918 18:56:19.320759  648496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0918 18:56:19.378000  648496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0918 18:56:19.399831  648496 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I0918 18:56:19.399902  648496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0918 18:56:19.438372  648496 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0918 18:56:19.438442  648496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0918 18:56:19.498565  648496 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0918 18:56:19.498641  648496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0918 18:56:19.507933  648496 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0918 18:56:19.508007  648496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0918 18:56:19.524162  648496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 18:56:19.546516  648496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0918 18:56:19.551992  648496 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I0918 18:56:19.552053  648496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0918 18:56:19.555463  648496 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0918 18:56:19.555522  648496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0918 18:56:19.567461  648496 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0918 18:56:19.567532  648496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0918 18:56:19.572281  648496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0918 18:56:19.644669  648496 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0918 18:56:19.644695  648496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0918 18:56:19.647124  648496 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0918 18:56:19.647145  648496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0918 18:56:19.662292  648496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0918 18:56:19.679570  648496 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0918 18:56:19.679598  648496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0918 18:56:19.692185  648496 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0918 18:56:19.692214  648496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0918 18:56:19.695057  648496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0918 18:56:19.798045  648496 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0918 18:56:19.798072  648496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0918 18:56:19.800992  648496 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0918 18:56:19.801018  648496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0918 18:56:19.853732  648496 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0918 18:56:19.853767  648496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0918 18:56:19.860614  648496 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0918 18:56:19.860648  648496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0918 18:56:19.976412  648496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0918 18:56:20.008962  648496 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0918 18:56:20.008989  648496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0918 18:56:20.023569  648496 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0918 18:56:20.023606  648496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0918 18:56:20.068831  648496 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0918 18:56:20.068862  648496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0918 18:56:20.149023  648496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0918 18:56:20.207594  648496 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0918 18:56:20.207623  648496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0918 18:56:20.245366  648496 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I0918 18:56:20.245393  648496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0918 18:56:20.332325  648496 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0918 18:56:20.332357  648496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0918 18:56:20.345260  648496 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0918 18:56:20.345287  648496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I0918 18:56:20.456129  648496 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0918 18:56:20.456162  648496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0918 18:56:20.463764  648496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0918 18:56:20.565478  648496 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0918 18:56:20.565503  648496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0918 18:56:20.666263  648496 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0918 18:56:20.666287  648496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0918 18:56:20.826439  648496 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0918 18:56:20.826473  648496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0918 18:56:21.005937  648496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0918 18:56:21.235946  648496 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.34777392s)
	I0918 18:56:21.236073  648496 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.380866979s)
	I0918 18:56:21.236092  648496 start.go:917] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
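The sed pipeline a few lines up edits the coredns ConfigMap in place; its net effect is a hosts block ahead of the forward plugin, so host.minikube.internal resolves to the host gateway while every other name falls through to the usual resolvers. The injected Corefile fragment, reconstructed from the sed expression:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}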
	I0918 18:56:21.236930  648496 node_ready.go:35] waiting up to 6m0s for node "addons-351470" to be "Ready" ...
	I0918 18:56:22.857372  648496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.47929133s)
	I0918 18:56:23.326748  648496 node_ready.go:58] node "addons-351470" has status "Ready":"False"
	I0918 18:56:23.685051  648496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.160805428s)
	I0918 18:56:23.685132  648496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.138549779s)
	I0918 18:56:24.342979  648496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.680657509s)
	I0918 18:56:24.343049  648496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.647963508s)
	I0918 18:56:24.343073  648496 addons.go:467] Verifying addon registry=true in "addons-351470"
	I0918 18:56:24.343102  648496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.770581646s)
	I0918 18:56:24.343116  648496 addons.go:467] Verifying addon ingress=true in "addons-351470"
	I0918 18:56:24.346311  648496 out.go:177] * Verifying ingress addon...
	I0918 18:56:24.343418  648496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.194354268s)
	I0918 18:56:24.343465  648496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.879646063s)
	I0918 18:56:24.343614  648496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.366891774s)
	I0918 18:56:24.348894  648496 out.go:177] * Verifying registry addon...
	W0918 18:56:24.348972  648496 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0918 18:56:24.349085  648496 addons.go:467] Verifying addon metrics-server=true in "addons-351470"
	I0918 18:56:24.352011  648496 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0918 18:56:24.356141  648496 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0918 18:56:24.352449  648496 retry.go:31] will retry after 131.401325ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
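The failure and retry above are the classic CRD ordering race: the VolumeSnapshot CRDs and a VolumeSnapshotClass object go to the API server in one apply, and snapshot.storage.k8s.io/v1 is not yet registered when the custom resource arrives. minikube simply retries after a short backoff; an alternative sketch is to apply in two phases and wait for the CRD to be established in between:

	kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=established \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	kubectl apply -f csi-hostpath-snapshotclass.yaml   # "kind: VolumeSnapshotClass" now maps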
	I0918 18:56:24.365238  648496 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0918 18:56:24.365269  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:24.370785  648496 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0918 18:56:24.370809  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:24.411919  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:24.428203  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:24.488695  648496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0918 18:56:24.778059  648496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.772062289s)
	I0918 18:56:24.778105  648496 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-351470"
	I0918 18:56:24.780408  648496 out.go:177] * Verifying csi-hostpath-driver addon...
	I0918 18:56:24.784325  648496 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0918 18:56:24.793994  648496 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0918 18:56:24.794041  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:24.799966  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:24.917059  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:24.940532  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:25.306559  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:25.421789  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:25.436265  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:25.748599  648496 node_ready.go:58] node "addons-351470" has status "Ready":"False"
	I0918 18:56:25.804802  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:25.923763  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:25.947261  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:26.152067  648496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.663321748s)
	I0918 18:56:26.312652  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:26.417316  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:26.433263  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:26.808752  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:26.895728  648496 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0918 18:56:26.895825  648496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-351470
	I0918 18:56:26.916466  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:26.933670  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:26.940339  648496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33415 SSHKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/addons-351470/id_rsa Username:docker}
	I0918 18:56:27.149234  648496 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
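(The two "scp memory" lines above copy in-memory buffers, the GCP credentials JSON and the project ID, straight into files inside the minikube container over the SSH connection opened on host port 33415. A minimal sketch of that pattern, assuming golang.org/x/crypto/ssh; this is illustrative only, not minikube's actual ssh_runner code, and the address, user, and payload below are hypothetical:

package main

import (
	"bytes"
	"fmt"

	"golang.org/x/crypto/ssh"
)

// copyMemory streams payload to remotePath over an SSH session,
// mirroring the "scp memory --> path (N bytes)" lines in the log.
// Using 'sudo tee' avoids needing an scp binary on the remote side.
func copyMemory(client *ssh.Client, payload []byte, remotePath string) error {
	session, err := client.NewSession()
	if err != nil {
		return err
	}
	defer session.Close()
	session.Stdin = bytes.NewReader(payload)
	return session.Run(fmt.Sprintf("sudo tee %s > /dev/null", remotePath))
}

func main() {
	// Hypothetical connection details for illustration only.
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.Password("example")},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33415", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	err = copyMemory(client, []byte(`{"type":"service_account"}`),
		"/var/lib/minikube/google_application_credentials.json")
	if err != nil {
		panic(err)
	}
}
)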
	I0918 18:56:27.226398  648496 addons.go:231] Setting addon gcp-auth=true in "addons-351470"
	I0918 18:56:27.226456  648496 host.go:66] Checking if "addons-351470" exists ...
	I0918 18:56:27.226960  648496 cli_runner.go:164] Run: docker container inspect addons-351470 --format={{.State.Status}}
	I0918 18:56:27.267282  648496 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0918 18:56:27.267333  648496 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-351470
	I0918 18:56:27.310702  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:27.317109  648496 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33415 SSHKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/addons-351470/id_rsa Username:docker}
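(The docker container inspect template above, '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}', pulls the host port mapped to the container's SSH port out of the inspect JSON: .NetworkSettings.Ports is a map keyed by container port, indexed here at "22/tcp", first binding, .HostPort. It resolves to 33415, matching the Port field in the sshutil client line. Running docker port addons-351470 22/tcp should report the same mapping.)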
	I0918 18:56:27.417509  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:27.433203  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:27.478032  648496 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0918 18:56:27.480683  648496 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0918 18:56:27.483116  648496 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0918 18:56:27.483142  648496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0918 18:56:27.541254  648496 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0918 18:56:27.541287  648496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0918 18:56:27.602288  648496 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0918 18:56:27.602313  648496 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0918 18:56:27.663212  648496 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0918 18:56:27.807752  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:27.922509  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:27.938098  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:28.237584  648496 node_ready.go:58] node "addons-351470" has status "Ready":"False"
	I0918 18:56:28.317366  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:28.418172  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:28.433137  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:28.780726  648496 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.11747561s)
	I0918 18:56:28.782478  648496 addons.go:467] Verifying addon gcp-auth=true in "addons-351470"
	I0918 18:56:28.785793  648496 out.go:177] * Verifying gcp-auth addon...
	I0918 18:56:28.789067  648496 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0918 18:56:28.840500  648496 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0918 18:56:28.840565  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:28.844673  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:28.868800  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:28.918129  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:28.933255  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:29.312627  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:29.373579  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:29.417252  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:29.432413  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:29.806124  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:29.874215  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:29.917500  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:29.932940  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:30.312644  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:30.373711  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:30.417450  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:30.433233  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:30.736120  648496 node_ready.go:58] node "addons-351470" has status "Ready":"False"
	I0918 18:56:30.805033  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:30.872929  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:30.916959  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:30.933279  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:31.311827  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:31.372820  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:31.416588  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:31.433309  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:31.805408  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:31.873257  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:31.917532  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:31.932837  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:32.308015  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:32.374178  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:32.417102  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:32.433852  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:32.736555  648496 node_ready.go:58] node "addons-351470" has status "Ready":"False"
	I0918 18:56:32.805378  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:32.873364  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:32.916905  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:32.933373  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:33.321493  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:33.373174  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:33.420643  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:33.433304  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:33.816093  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:33.893341  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:33.916665  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:33.934287  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:34.309789  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:34.373178  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:34.417000  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:34.433519  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:34.805056  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:34.882814  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:34.916589  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:34.932901  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:35.237249  648496 node_ready.go:58] node "addons-351470" has status "Ready":"False"
	I0918 18:56:35.313370  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:35.373643  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:35.417191  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:35.432702  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:35.805714  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:35.874178  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:35.919531  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:35.933044  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:36.305491  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:36.372877  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:36.422970  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:36.432807  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:36.805573  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:36.873305  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:36.916679  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:36.932478  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:37.304947  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:37.373101  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:37.416478  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:37.432591  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:37.735462  648496 node_ready.go:58] node "addons-351470" has status "Ready":"False"
	I0918 18:56:37.805100  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:37.873244  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:37.916859  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:37.933021  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:38.304204  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:38.372840  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:38.417011  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:38.433049  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:38.804679  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:38.872614  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:38.916529  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:38.932816  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:39.305561  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:39.373423  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:39.416597  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:39.432684  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:39.736392  648496 node_ready.go:58] node "addons-351470" has status "Ready":"False"
	I0918 18:56:39.805131  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:39.873481  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:39.916262  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:39.932376  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:40.307310  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:40.373099  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:40.416981  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:40.432946  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:40.804282  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:40.872977  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:40.917563  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:40.932829  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:41.304995  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:41.376265  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:41.416500  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:41.432503  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:41.804607  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:41.873078  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:41.916236  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:41.933076  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:42.236651  648496 node_ready.go:58] node "addons-351470" has status "Ready":"False"
	I0918 18:56:42.305320  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:42.373379  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:42.416830  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:42.433169  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:42.804264  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:42.873099  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:42.916657  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:42.932764  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:43.305276  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:43.372793  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:43.416919  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:43.433039  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:43.805323  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:43.873238  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:43.916014  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:43.933098  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:44.305197  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:44.373252  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:44.416808  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:44.433093  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:44.736347  648496 node_ready.go:58] node "addons-351470" has status "Ready":"False"
	I0918 18:56:44.804820  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:44.872372  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:44.916577  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:44.932868  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:45.309364  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:45.376029  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:45.417652  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:45.434726  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:45.805244  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:45.873242  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:45.918211  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:45.932209  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:46.305538  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:46.373371  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:46.416664  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:46.432703  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:46.804848  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:46.872816  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:46.916077  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:46.933090  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:47.236280  648496 node_ready.go:58] node "addons-351470" has status "Ready":"False"
	I0918 18:56:47.304629  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:47.373217  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:47.416916  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:47.433087  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:47.804448  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:47.873623  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:47.916646  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:47.932878  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:48.306215  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:48.372718  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:48.416381  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:48.432432  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:48.804701  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:48.873345  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:48.916470  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:48.932535  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:49.236949  648496 node_ready.go:58] node "addons-351470" has status "Ready":"False"
	I0918 18:56:49.305438  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:49.373046  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:49.416079  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:49.432887  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:49.805383  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:49.872559  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:49.916372  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:49.932566  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:50.305898  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:50.372775  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:50.416632  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:50.432592  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:50.805095  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:50.873285  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:50.916625  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:50.932582  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:51.304889  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:51.372851  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:51.416355  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:51.432841  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:51.736929  648496 node_ready.go:58] node "addons-351470" has status "Ready":"False"
	I0918 18:56:51.804795  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:51.873260  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:51.916867  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:51.933005  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:52.305489  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:52.372630  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:52.417155  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:52.432259  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:52.762208  648496 node_ready.go:49] node "addons-351470" has status "Ready":"True"
	I0918 18:56:52.762234  648496 node_ready.go:38] duration metric: took 31.525262638s waiting for node "addons-351470" to be "Ready" ...
	I0918 18:56:52.762250  648496 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 18:56:52.781332  648496 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-hfcps" in "kube-system" namespace to be "Ready" ...
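(The kapi.go:96 and pod_ready.go lines filling this log are all instances of one loop: list pods by label selector, or fetch one by name, and poll until the PodReady condition is True or a deadline passes. A rough client-go sketch of that loop, offered as an illustration rather than minikube's implementation; the selector and 6m timeout are taken from the log, the kubeconfig path is the client-go default:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitForPods polls pods matching selector in ns once per second until
// all are Ready or timeout expires, logging each still-pending pod.
func waitForPods(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		allReady := len(pods.Items) > 0
		for i := range pods.Items {
			if !podReady(&pods.Items[i]) {
				fmt.Printf("waiting for pod %q, current state: %s\n",
					selector, pods.Items[i].Status.Phase)
				allReady = false
			}
		}
		if allReady {
			return nil
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("timed out waiting for pods %q", selector)
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	err = waitForPods(cs, "kube-system",
		"kubernetes.io/minikube-addons=csi-hostpath-driver", 6*time.Minute)
	if err != nil {
		panic(err)
	}
}
)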
	I0918 18:56:52.825250  648496 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0918 18:56:52.825280  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:52.878340  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:52.927458  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:52.955009  648496 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0918 18:56:52.955036  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:53.358995  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:53.398473  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:53.419236  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:53.491150  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:53.806181  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:53.873265  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:53.916547  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:53.933002  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:54.310688  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:54.353704  648496 pod_ready.go:92] pod "coredns-5dd5756b68-hfcps" in "kube-system" namespace has status "Ready":"True"
	I0918 18:56:54.353728  648496 pod_ready.go:81] duration metric: took 1.572361024s waiting for pod "coredns-5dd5756b68-hfcps" in "kube-system" namespace to be "Ready" ...
	I0918 18:56:54.353754  648496 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-351470" in "kube-system" namespace to be "Ready" ...
	I0918 18:56:54.359315  648496 pod_ready.go:92] pod "etcd-addons-351470" in "kube-system" namespace has status "Ready":"True"
	I0918 18:56:54.359345  648496 pod_ready.go:81] duration metric: took 5.579601ms waiting for pod "etcd-addons-351470" in "kube-system" namespace to be "Ready" ...
	I0918 18:56:54.359360  648496 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-351470" in "kube-system" namespace to be "Ready" ...
	I0918 18:56:54.367447  648496 pod_ready.go:92] pod "kube-apiserver-addons-351470" in "kube-system" namespace has status "Ready":"True"
	I0918 18:56:54.367472  648496 pod_ready.go:81] duration metric: took 8.104062ms waiting for pod "kube-apiserver-addons-351470" in "kube-system" namespace to be "Ready" ...
	I0918 18:56:54.367484  648496 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-351470" in "kube-system" namespace to be "Ready" ...
	I0918 18:56:54.373856  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:54.374258  648496 pod_ready.go:92] pod "kube-controller-manager-addons-351470" in "kube-system" namespace has status "Ready":"True"
	I0918 18:56:54.374276  648496 pod_ready.go:81] duration metric: took 6.784489ms waiting for pod "kube-controller-manager-addons-351470" in "kube-system" namespace to be "Ready" ...
	I0918 18:56:54.374290  648496 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-f7vqg" in "kube-system" namespace to be "Ready" ...
	I0918 18:56:54.416291  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:54.434255  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:54.737468  648496 pod_ready.go:92] pod "kube-proxy-f7vqg" in "kube-system" namespace has status "Ready":"True"
	I0918 18:56:54.737556  648496 pod_ready.go:81] duration metric: took 363.256598ms waiting for pod "kube-proxy-f7vqg" in "kube-system" namespace to be "Ready" ...
	I0918 18:56:54.737593  648496 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-351470" in "kube-system" namespace to be "Ready" ...
	I0918 18:56:54.806063  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:54.873153  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:54.917383  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:54.933228  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:55.138312  648496 pod_ready.go:92] pod "kube-scheduler-addons-351470" in "kube-system" namespace has status "Ready":"True"
	I0918 18:56:55.138342  648496 pod_ready.go:81] duration metric: took 400.724007ms waiting for pod "kube-scheduler-addons-351470" in "kube-system" namespace to be "Ready" ...
	I0918 18:56:55.138376  648496 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-7c66d45ddc-z9mjl" in "kube-system" namespace to be "Ready" ...
	I0918 18:56:55.310785  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:55.372650  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:55.416597  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:55.438038  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:55.805788  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:55.872695  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:55.917333  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:55.933510  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:56.306719  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:56.372811  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:56.417292  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:56.432728  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:56.806396  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:56.873015  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:56.924773  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:56.939987  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:57.307659  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:57.372450  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:57.416727  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:57.434254  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:57.443436  648496 pod_ready.go:102] pod "metrics-server-7c66d45ddc-z9mjl" in "kube-system" namespace has status "Ready":"False"
	I0918 18:56:57.807954  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:57.872953  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:57.917125  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:57.935872  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:58.307679  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:58.373698  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:58.417273  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:58.433109  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:58.806650  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:58.873593  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:58.917570  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:58.944423  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:59.310532  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:59.373621  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:59.416965  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:59.433707  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:56:59.443734  648496 pod_ready.go:102] pod "metrics-server-7c66d45ddc-z9mjl" in "kube-system" namespace has status "Ready":"False"
	I0918 18:56:59.806457  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:56:59.873159  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:56:59.917078  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:56:59.933507  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:00.309788  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:00.376613  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:00.417696  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:00.433903  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:00.807704  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:00.873760  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:00.917464  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:00.933572  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:01.321797  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:01.372983  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:01.417232  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:01.433565  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:01.444374  648496 pod_ready.go:102] pod "metrics-server-7c66d45ddc-z9mjl" in "kube-system" namespace has status "Ready":"False"
	I0918 18:57:01.807336  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:01.872873  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:01.916201  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:01.932560  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:02.306673  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:02.373712  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:02.416988  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:02.434568  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:02.806200  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:02.874209  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:02.918434  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:02.933603  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:03.316803  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:03.374168  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:03.417480  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:03.435390  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:03.453459  648496 pod_ready.go:102] pod "metrics-server-7c66d45ddc-z9mjl" in "kube-system" namespace has status "Ready":"False"
	I0918 18:57:03.807190  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:03.874858  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:03.917146  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:03.933611  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:04.309007  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:04.373589  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:04.428247  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:04.434673  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:04.808721  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:04.873735  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:04.924666  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:04.934117  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:05.306852  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:05.373090  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:05.416886  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:05.434087  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:05.810737  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:05.873204  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:05.917655  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:05.933468  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:05.945891  648496 pod_ready.go:102] pod "metrics-server-7c66d45ddc-z9mjl" in "kube-system" namespace has status "Ready":"False"
	I0918 18:57:06.309849  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:06.372695  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:06.450145  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:06.471631  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:06.808598  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:06.873844  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:06.924066  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:06.940323  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:07.310271  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:07.374013  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:07.418386  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:07.433746  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:07.806306  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:07.873563  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:07.917658  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:07.935118  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:07.949262  648496 pod_ready.go:102] pod "metrics-server-7c66d45ddc-z9mjl" in "kube-system" namespace has status "Ready":"False"
	I0918 18:57:08.305953  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:08.373459  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:08.417509  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:08.437500  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:08.806183  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:08.872225  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:08.917444  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:08.933647  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:09.321097  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:09.375010  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:09.421695  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:09.436635  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:09.808263  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:09.873123  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:09.917348  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:09.934160  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:09.954266  648496 pod_ready.go:102] pod "metrics-server-7c66d45ddc-z9mjl" in "kube-system" namespace has status "Ready":"False"
	I0918 18:57:10.311280  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:10.373510  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:10.420784  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:10.481602  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:10.806469  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:10.874071  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:10.917563  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:10.935878  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:11.308805  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:11.373605  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:11.417529  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:11.434106  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:11.807832  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:11.876742  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:11.917777  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:11.944716  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:12.311020  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:12.381506  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:12.441795  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:12.444543  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:12.463916  648496 pod_ready.go:102] pod "metrics-server-7c66d45ddc-z9mjl" in "kube-system" namespace has status "Ready":"False"
	I0918 18:57:12.806808  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:12.873086  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:12.917402  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:12.933488  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:13.305861  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:13.373254  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:13.416629  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:13.439170  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:13.806292  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:13.877358  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:13.917547  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:13.935185  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:14.306178  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:14.372728  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:14.417853  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:14.433543  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:14.806295  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:14.872675  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:14.918907  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:14.933748  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:14.947006  648496 pod_ready.go:102] pod "metrics-server-7c66d45ddc-z9mjl" in "kube-system" namespace has status "Ready":"False"
	I0918 18:57:15.307096  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:15.373510  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:15.420518  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:15.435554  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:15.807417  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:15.874654  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:15.918704  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:15.934103  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:16.321895  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:16.375005  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:16.418710  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:16.436638  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:16.808519  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:16.873436  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:16.918831  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:16.936687  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:16.953164  648496 pod_ready.go:102] pod "metrics-server-7c66d45ddc-z9mjl" in "kube-system" namespace has status "Ready":"False"
	I0918 18:57:17.307558  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:17.373518  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:17.417058  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:17.436200  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:17.806027  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:17.875469  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:17.916837  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:17.935035  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:17.944366  648496 pod_ready.go:92] pod "metrics-server-7c66d45ddc-z9mjl" in "kube-system" namespace has status "Ready":"True"
	I0918 18:57:17.944437  648496 pod_ready.go:81] duration metric: took 22.806050374s waiting for pod "metrics-server-7c66d45ddc-z9mjl" in "kube-system" namespace to be "Ready" ...
	I0918 18:57:17.944473  648496 pod_ready.go:38] duration metric: took 25.182210097s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 18:57:17.944516  648496 api_server.go:52] waiting for apiserver process to appear ...
	I0918 18:57:17.944606  648496 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 18:57:17.970173  648496 api_server.go:72] duration metric: took 59.10365399s to wait for apiserver process to appear ...
	I0918 18:57:17.970252  648496 api_server.go:88] waiting for apiserver healthz status ...
	I0918 18:57:17.970299  648496 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0918 18:57:17.980621  648496 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0918 18:57:17.982021  648496 api_server.go:141] control plane version: v1.28.2
	I0918 18:57:17.982047  648496 api_server.go:131] duration metric: took 11.761001ms to wait for apiserver health ...
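The healthz probe logged above can be reproduced by hand against the same endpoint; a hypothetical manual check, assuming the addons-351470 profile is still running, would be:
	kubectl --context addons-351470 get --raw '/healthz'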
	I0918 18:57:17.982057  648496 system_pods.go:43] waiting for kube-system pods to appear ...
	I0918 18:57:18.002736  648496 system_pods.go:59] 17 kube-system pods found
	I0918 18:57:18.002827  648496 system_pods.go:61] "coredns-5dd5756b68-hfcps" [60a3199b-71b3-4769-b9fd-2e8f4a3063b5] Running
	I0918 18:57:18.002849  648496 system_pods.go:61] "csi-hostpath-attacher-0" [72054a84-2928-4def-a4e3-90aa1e60bcb0] Running
	I0918 18:57:18.002870  648496 system_pods.go:61] "csi-hostpath-resizer-0" [3c473d2f-2f48-4ba5-a9ef-2213775d2843] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0918 18:57:18.002911  648496 system_pods.go:61] "csi-hostpathplugin-cknjm" [2a99e83a-561e-4f98-92f0-b213f2657cdb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0918 18:57:18.002935  648496 system_pods.go:61] "etcd-addons-351470" [67afaa28-c8c1-4f45-97c1-b7c5805b1591] Running
	I0918 18:57:18.002966  648496 system_pods.go:61] "kindnet-ndjjv" [70d06c5a-515c-44f9-8911-6a675242a745] Running
	I0918 18:57:18.002984  648496 system_pods.go:61] "kube-apiserver-addons-351470" [042d8e5f-afca-459b-b1d8-808d33ab8130] Running
	I0918 18:57:18.003011  648496 system_pods.go:61] "kube-controller-manager-addons-351470" [8586a2db-55e9-40f5-877d-a0028183b2b3] Running
	I0918 18:57:18.003040  648496 system_pods.go:61] "kube-ingress-dns-minikube" [38dc4cd8-2f37-4e52-a13c-99f3ec43b6b1] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0918 18:57:18.003059  648496 system_pods.go:61] "kube-proxy-f7vqg" [b3be5896-8575-4aa0-b619-366a58271688] Running
	I0918 18:57:18.003080  648496 system_pods.go:61] "kube-scheduler-addons-351470" [5e985ca7-101f-41ea-a182-d1428c8b509f] Running
	I0918 18:57:18.003099  648496 system_pods.go:61] "metrics-server-7c66d45ddc-z9mjl" [5d85482f-b583-40c4-b7e9-0174b3dedab1] Running
	I0918 18:57:18.003132  648496 system_pods.go:61] "registry-9gb28" [527d0996-363b-4641-aba2-49d6b29da00c] Running
	I0918 18:57:18.003155  648496 system_pods.go:61] "registry-proxy-gzc8v" [b1fe082f-9b6f-41d3-964b-615c0229250d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0918 18:57:18.003175  648496 system_pods.go:61] "snapshot-controller-58dbcc7b99-9bh9g" [fbff0e9a-2906-475a-9447-fa87bc4a5c7a] Running
	I0918 18:57:18.003208  648496 system_pods.go:61] "snapshot-controller-58dbcc7b99-wzvtx" [3310d460-50dc-4e21-b422-128850e43a41] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0918 18:57:18.003230  648496 system_pods.go:61] "storage-provisioner" [6ef0b042-ff4f-49b7-aa27-350439b42e37] Running
	I0918 18:57:18.003292  648496 system_pods.go:74] duration metric: took 21.228148ms to wait for pod list to return data ...
	I0918 18:57:18.003328  648496 default_sa.go:34] waiting for default service account to be created ...
	I0918 18:57:18.015873  648496 default_sa.go:45] found service account: "default"
	I0918 18:57:18.015971  648496 default_sa.go:55] duration metric: took 12.624128ms for default service account to be created ...
	I0918 18:57:18.015998  648496 system_pods.go:116] waiting for k8s-apps to be running ...
	I0918 18:57:18.034089  648496 system_pods.go:86] 17 kube-system pods found
	I0918 18:57:18.034181  648496 system_pods.go:89] "coredns-5dd5756b68-hfcps" [60a3199b-71b3-4769-b9fd-2e8f4a3063b5] Running
	I0918 18:57:18.034204  648496 system_pods.go:89] "csi-hostpath-attacher-0" [72054a84-2928-4def-a4e3-90aa1e60bcb0] Running
	I0918 18:57:18.034227  648496 system_pods.go:89] "csi-hostpath-resizer-0" [3c473d2f-2f48-4ba5-a9ef-2213775d2843] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0918 18:57:18.034281  648496 system_pods.go:89] "csi-hostpathplugin-cknjm" [2a99e83a-561e-4f98-92f0-b213f2657cdb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0918 18:57:18.034315  648496 system_pods.go:89] "etcd-addons-351470" [67afaa28-c8c1-4f45-97c1-b7c5805b1591] Running
	I0918 18:57:18.034341  648496 system_pods.go:89] "kindnet-ndjjv" [70d06c5a-515c-44f9-8911-6a675242a745] Running
	I0918 18:57:18.034361  648496 system_pods.go:89] "kube-apiserver-addons-351470" [042d8e5f-afca-459b-b1d8-808d33ab8130] Running
	I0918 18:57:18.034399  648496 system_pods.go:89] "kube-controller-manager-addons-351470" [8586a2db-55e9-40f5-877d-a0028183b2b3] Running
	I0918 18:57:18.034431  648496 system_pods.go:89] "kube-ingress-dns-minikube" [38dc4cd8-2f37-4e52-a13c-99f3ec43b6b1] Running / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0918 18:57:18.034453  648496 system_pods.go:89] "kube-proxy-f7vqg" [b3be5896-8575-4aa0-b619-366a58271688] Running
	I0918 18:57:18.034473  648496 system_pods.go:89] "kube-scheduler-addons-351470" [5e985ca7-101f-41ea-a182-d1428c8b509f] Running
	I0918 18:57:18.034505  648496 system_pods.go:89] "metrics-server-7c66d45ddc-z9mjl" [5d85482f-b583-40c4-b7e9-0174b3dedab1] Running
	I0918 18:57:18.034531  648496 system_pods.go:89] "registry-9gb28" [527d0996-363b-4641-aba2-49d6b29da00c] Running
	I0918 18:57:18.034553  648496 system_pods.go:89] "registry-proxy-gzc8v" [b1fe082f-9b6f-41d3-964b-615c0229250d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0918 18:57:18.034574  648496 system_pods.go:89] "snapshot-controller-58dbcc7b99-9bh9g" [fbff0e9a-2906-475a-9447-fa87bc4a5c7a] Running
	I0918 18:57:18.034607  648496 system_pods.go:89] "snapshot-controller-58dbcc7b99-wzvtx" [3310d460-50dc-4e21-b422-128850e43a41] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0918 18:57:18.034637  648496 system_pods.go:89] "storage-provisioner" [6ef0b042-ff4f-49b7-aa27-350439b42e37] Running
	I0918 18:57:18.034660  648496 system_pods.go:126] duration metric: took 18.644937ms to wait for k8s-apps to be running ...
	I0918 18:57:18.034681  648496 system_svc.go:44] waiting for kubelet service to be running ....
	I0918 18:57:18.034767  648496 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 18:57:18.076951  648496 system_svc.go:56] duration metric: took 42.259018ms WaitForService to wait for kubelet.
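The kubelet check above runs systemctl inside the node over SSH; a hypothetical way to repeat it from the host would be:
	minikube -p addons-351470 ssh -- sudo systemctl is-active kubelet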
	I0918 18:57:18.077028  648496 kubeadm.go:581] duration metric: took 59.210514691s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0918 18:57:18.077065  648496 node_conditions.go:102] verifying NodePressure condition ...
	I0918 18:57:18.081574  648496 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0918 18:57:18.081666  648496 node_conditions.go:123] node cpu capacity is 2
	I0918 18:57:18.081699  648496 node_conditions.go:105] duration metric: took 4.613532ms to run NodePressure ...
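The NodePressure step reads the node's reported capacity (203034800Ki ephemeral storage and 2 CPUs above); a hypothetical spot-check of the same fields:
	kubectl --context addons-351470 get node addons-351470 -o jsonpath='{.status.capacity}'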
	I0918 18:57:18.081741  648496 start.go:228] waiting for startup goroutines ...
	I0918 18:57:18.305992  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:18.375009  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:18.417430  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:18.439056  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:18.806367  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:18.873009  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:18.916642  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:18.933491  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:19.306797  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:19.372435  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:19.416787  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:19.433120  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:19.806364  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:19.874180  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:19.918600  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:19.934337  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:20.306710  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:20.375020  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:20.417168  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:20.433590  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:20.811180  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:20.872986  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:20.917300  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:20.932628  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:21.306354  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:21.372919  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:21.416319  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:21.432809  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:21.810054  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:21.873286  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:21.916172  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:21.932684  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:22.306790  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:22.373330  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:22.417659  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:22.433319  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:22.806609  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:22.873132  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:22.918032  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:22.934025  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:23.315567  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:23.381142  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:23.418497  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:23.436876  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:23.807369  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:23.873628  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:23.917352  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:23.937812  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:24.307309  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:24.374125  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:24.417616  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:24.435553  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:24.806977  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:24.873067  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:24.925049  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:24.933628  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:25.309798  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:25.377750  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:25.420566  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:25.434289  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:25.806501  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:25.872959  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:25.916551  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:25.937003  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:26.306882  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:26.373754  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:26.438253  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:26.449160  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0918 18:57:26.806777  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:26.874670  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:26.926869  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:26.933890  648496 kapi.go:107] duration metric: took 1m2.577745074s to wait for kubernetes.io/minikube-addons=registry ...
	I0918 18:57:27.305659  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:27.373676  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:27.418859  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:27.806750  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:27.872233  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:27.916735  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:28.307478  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:28.373092  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:28.416507  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:28.809139  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:28.874249  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:28.922406  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:29.307368  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:29.373282  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:29.416558  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:29.806303  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:29.872973  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:29.919875  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:30.308699  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:30.373261  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:30.417728  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:30.811773  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:30.873465  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:30.917875  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:31.328879  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:31.373629  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:31.418656  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:31.807262  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:31.874746  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:31.917269  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:32.326518  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:32.379111  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:32.417335  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:32.820115  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:32.872955  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:32.917709  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:33.342895  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:33.375514  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:33.420681  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:33.807870  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:33.873095  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:33.917355  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:34.308591  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:34.373976  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:34.418181  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:34.807057  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:34.872510  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:34.917425  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:35.309117  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:35.375381  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:35.417531  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:35.807449  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:35.873109  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:35.919964  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:36.318363  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:36.373049  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:36.417439  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:36.806610  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:36.873530  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:36.918212  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:37.308047  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:37.374030  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:37.417670  648496 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0918 18:57:37.805654  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:37.874369  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:37.918264  648496 kapi.go:107] duration metric: took 1m13.566249636s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0918 18:57:38.312495  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:38.373682  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:38.806415  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:38.873868  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:39.306247  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:39.373070  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:39.808163  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:39.873143  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:40.320775  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:40.373035  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:40.807634  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:40.873351  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:41.305985  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:41.372637  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:41.807432  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:41.877528  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:42.308885  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:42.380667  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:42.806973  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:42.872985  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:43.306234  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:43.372650  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:43.807038  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:43.872762  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:44.306326  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:44.372915  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:44.810602  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:44.873464  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:45.308522  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:45.374476  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:45.806294  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:45.874083  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:46.306679  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:46.373440  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:46.807371  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0918 18:57:46.873034  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:47.306811  648496 kapi.go:107] duration metric: took 1m22.522483742s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0918 18:57:47.372823  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:47.873327  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:48.372318  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:48.872476  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:49.372561  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:49.872389  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:50.372532  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:50.872665  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:51.372709  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:51.873562  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:52.372353  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:52.873194  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:53.372677  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:53.872630  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:54.372800  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:54.875267  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:55.372900  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:55.877685  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:56.372560  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:56.878412  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:57.372279  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:57.873015  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:58.372855  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:58.873589  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:59.373944  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:57:59.874096  648496 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0918 18:58:00.374360  648496 kapi.go:107] duration metric: took 1m31.585287609s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0918 18:58:00.376680  648496 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-351470 cluster.
	I0918 18:58:00.378715  648496 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0918 18:58:00.380591  648496 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0918 18:58:00.382814  648496 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, storage-provisioner, default-storageclass, inspektor-gadget, metrics-server, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0918 18:58:00.385149  648496 addons.go:502] enable addons completed in 1m41.830459175s: enabled=[cloud-spanner ingress-dns storage-provisioner default-storageclass inspektor-gadget metrics-server volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0918 18:58:00.385211  648496 start.go:233] waiting for cluster config update ...
	I0918 18:58:00.385231  648496 start.go:242] writing updated cluster config ...
	I0918 18:58:00.385570  648496 ssh_runner.go:195] Run: rm -f paused
	I0918 18:58:00.459160  648496 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I0918 18:58:00.461669  648496 out.go:177] * Done! kubectl is now configured to use "addons-351470" cluster and "default" namespace by default
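As a sketch of the opt-out described in the gcp-auth output above, a pod carrying the gcp-auth-skip-secret label at creation time would not get credentials mounted; a hypothetical example (pod name and image are placeholders):
	kubectl --context addons-351470 run skip-auth-demo --image=nginx --labels=gcp-auth-skip-secret=true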
	
	* 
	* ==> CRI-O <==
	* Sep 18 18:57:59 addons-351470 crio[892]: time="2023-09-18 18:57:59.772231444Z" level=info msg="Created container 1fd8f42713f99d9bf278658a3bddc08632d8a5184dcf5796706a3fb27d74d102: gcp-auth/gcp-auth-d4c87556c-7tlfs/gcp-auth" id=e19be97b-3c4e-4b5b-8639-596489d62248 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 18 18:57:59 addons-351470 crio[892]: time="2023-09-18 18:57:59.773027296Z" level=info msg="Starting container: 1fd8f42713f99d9bf278658a3bddc08632d8a5184dcf5796706a3fb27d74d102" id=2e21cb6f-315e-442c-b60c-1b80a2b1f030 name=/runtime.v1.RuntimeService/StartContainer
	Sep 18 18:57:59 addons-351470 crio[892]: time="2023-09-18 18:57:59.788649136Z" level=info msg="Started container" PID=5377 containerID=1fd8f42713f99d9bf278658a3bddc08632d8a5184dcf5796706a3fb27d74d102 description=gcp-auth/gcp-auth-d4c87556c-7tlfs/gcp-auth id=2e21cb6f-315e-442c-b60c-1b80a2b1f030 name=/runtime.v1.RuntimeService/StartContainer sandboxID=639a7f7dd8df660dd0afff72be09c65c756bbafaeaf2141c629f0ad3d34e083a
	Sep 18 18:58:06 addons-351470 crio[892]: time="2023-09-18 18:58:06.223662681Z" level=info msg="Stopping pod sandbox: 8543a822bc1ccbc67c00fecaf3047bb59ffb49ae760c4366289536e8197dfa60" id=df6c342a-c3b7-43fc-a26e-b3566c935305 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 18 18:58:06 addons-351470 crio[892]: time="2023-09-18 18:58:06.223988098Z" level=info msg="Got pod network &{Name:cloud-spanner-emulator-7d49f968d9-2wm2h Namespace:default ID:8543a822bc1ccbc67c00fecaf3047bb59ffb49ae760c4366289536e8197dfa60 UID:4757cd07-5aa4-4fb4-b4be-af4087e07f4f NetNS:/var/run/netns/52f3337b-9d53-4d8d-b42e-70562a335b1f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 18 18:58:06 addons-351470 crio[892]: time="2023-09-18 18:58:06.224122950Z" level=info msg="Deleting pod default_cloud-spanner-emulator-7d49f968d9-2wm2h from CNI network \"kindnet\" (type=ptp)"
	Sep 18 18:58:06 addons-351470 crio[892]: time="2023-09-18 18:58:06.268845991Z" level=info msg="Stopped pod sandbox: 8543a822bc1ccbc67c00fecaf3047bb59ffb49ae760c4366289536e8197dfa60" id=df6c342a-c3b7-43fc-a26e-b3566c935305 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 18 18:58:06 addons-351470 crio[892]: time="2023-09-18 18:58:06.575275910Z" level=info msg="Removing container: ccd2c967eda2e96854237ab8a74c1a129efd530198405b0d1575510a228f1358" id=05aa9997-bc46-46e0-aaf4-c6696a8d0cbd name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 18 18:58:06 addons-351470 crio[892]: time="2023-09-18 18:58:06.612584115Z" level=info msg="Removed container ccd2c967eda2e96854237ab8a74c1a129efd530198405b0d1575510a228f1358: default/cloud-spanner-emulator-7d49f968d9-2wm2h/cloud-spanner-emulator" id=05aa9997-bc46-46e0-aaf4-c6696a8d0cbd name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 18 18:58:06 addons-351470 crio[892]: time="2023-09-18 18:58:06.614798437Z" level=info msg="Removing container: 51c57c39bd60ec564dafd71add86818a5ad06e3d4ce63362e3cebd2d6082815f" id=7325e020-dd8f-4a6b-b8a6-4896a76914a1 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 18 18:58:06 addons-351470 crio[892]: time="2023-09-18 18:58:06.658097002Z" level=info msg="Removed container 51c57c39bd60ec564dafd71add86818a5ad06e3d4ce63362e3cebd2d6082815f: gcp-auth/gcp-auth-certs-patch-jnx76/patch" id=7325e020-dd8f-4a6b-b8a6-4896a76914a1 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 18 18:58:06 addons-351470 crio[892]: time="2023-09-18 18:58:06.659684761Z" level=info msg="Removing container: aab74105c866af4f4563c3966bf50c6dcf807e9d4aaab514a2c1c3d2856983fb" id=74845448-5715-451a-9a73-e0b86b50856e name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 18 18:58:06 addons-351470 crio[892]: time="2023-09-18 18:58:06.693613083Z" level=info msg="Removed container aab74105c866af4f4563c3966bf50c6dcf807e9d4aaab514a2c1c3d2856983fb: gcp-auth/gcp-auth-certs-create-4n5tw/create" id=74845448-5715-451a-9a73-e0b86b50856e name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 18 18:58:06 addons-351470 crio[892]: time="2023-09-18 18:58:06.695224472Z" level=info msg="Stopping pod sandbox: c04b986fa909aee78880ec9d71323c9ea49b908037fe062d94d66e4315a1dc80" id=9b0c8521-99ae-459c-b9d5-73c387affdeb name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 18 18:58:06 addons-351470 crio[892]: time="2023-09-18 18:58:06.695284509Z" level=info msg="Stopped pod sandbox (already stopped): c04b986fa909aee78880ec9d71323c9ea49b908037fe062d94d66e4315a1dc80" id=9b0c8521-99ae-459c-b9d5-73c387affdeb name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 18 18:58:06 addons-351470 crio[892]: time="2023-09-18 18:58:06.695633082Z" level=info msg="Removing pod sandbox: c04b986fa909aee78880ec9d71323c9ea49b908037fe062d94d66e4315a1dc80" id=c2e059ff-6807-4532-a269-3023af826862 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 18 18:58:06 addons-351470 crio[892]: time="2023-09-18 18:58:06.705613084Z" level=info msg="Removed pod sandbox: c04b986fa909aee78880ec9d71323c9ea49b908037fe062d94d66e4315a1dc80" id=c2e059ff-6807-4532-a269-3023af826862 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 18 18:58:06 addons-351470 crio[892]: time="2023-09-18 18:58:06.706272797Z" level=info msg="Stopping pod sandbox: 8ef3a911643e3d0b488895eae60c89d34e11fb023417f4403c5e0d7461417a90" id=eaffa658-45de-4a10-81a5-db16d54bd8b2 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 18 18:58:06 addons-351470 crio[892]: time="2023-09-18 18:58:06.706309515Z" level=info msg="Stopped pod sandbox (already stopped): 8ef3a911643e3d0b488895eae60c89d34e11fb023417f4403c5e0d7461417a90" id=eaffa658-45de-4a10-81a5-db16d54bd8b2 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 18 18:58:06 addons-351470 crio[892]: time="2023-09-18 18:58:06.706574058Z" level=info msg="Removing pod sandbox: 8ef3a911643e3d0b488895eae60c89d34e11fb023417f4403c5e0d7461417a90" id=17bcce38-7c1b-4b24-97a1-0fa0125998e7 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 18 18:58:06 addons-351470 crio[892]: time="2023-09-18 18:58:06.715028226Z" level=info msg="Removed pod sandbox: 8ef3a911643e3d0b488895eae60c89d34e11fb023417f4403c5e0d7461417a90" id=17bcce38-7c1b-4b24-97a1-0fa0125998e7 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 18 18:58:06 addons-351470 crio[892]: time="2023-09-18 18:58:06.715568611Z" level=info msg="Stopping pod sandbox: 8543a822bc1ccbc67c00fecaf3047bb59ffb49ae760c4366289536e8197dfa60" id=5b77e717-0956-49f2-8bbc-511ae9075495 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 18 18:58:06 addons-351470 crio[892]: time="2023-09-18 18:58:06.715698253Z" level=info msg="Stopped pod sandbox (already stopped): 8543a822bc1ccbc67c00fecaf3047bb59ffb49ae760c4366289536e8197dfa60" id=5b77e717-0956-49f2-8bbc-511ae9075495 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 18 18:58:06 addons-351470 crio[892]: time="2023-09-18 18:58:06.716806411Z" level=info msg="Removing pod sandbox: 8543a822bc1ccbc67c00fecaf3047bb59ffb49ae760c4366289536e8197dfa60" id=c06a8541-cf75-4e8f-9421-42712e7300d9 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 18 18:58:06 addons-351470 crio[892]: time="2023-09-18 18:58:06.724947970Z" level=info msg="Removed pod sandbox: 8543a822bc1ccbc67c00fecaf3047bb59ffb49ae760c4366289536e8197dfa60" id=c06a8541-cf75-4e8f-9421-42712e7300d9 name=/runtime.v1.RuntimeService/RemovePodSandbox
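The CRI-O messages above show pod sandboxes being stopped and removed; a hypothetical follow-up to list the sandboxes still present on the node:
	minikube -p addons-351470 ssh -- sudo crictl pods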
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	1fd8f42713f99       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:63b520448091bc94aa4dba00d6b3b3c25e410c4fb73aa46feae5b25f9895abaa                                 8 seconds ago        Running             gcp-auth                                 0                   639a7f7dd8df6       gcp-auth-d4c87556c-7tlfs
	e259a9b96620d       1499ed4fbd0aa6ea742ab6bce25603aa33556e1ac0e2f24a4901a675247e538a                                                                             12 seconds ago       Exited              minikube-ingress-dns                     3                   ef1d33011851a       kube-ingress-dns-minikube
	3baca1f7777bf       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          22 seconds ago       Running             csi-snapshotter                          0                   1cd91e55b90d7       csi-hostpathplugin-cknjm
	50144b7e55c5d       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          25 seconds ago       Running             csi-provisioner                          0                   1cd91e55b90d7       csi-hostpathplugin-cknjm
	7325764fc9c67       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            27 seconds ago       Running             liveness-probe                           0                   1cd91e55b90d7       csi-hostpathplugin-cknjm
	e0d464e332307       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           29 seconds ago       Running             hostpath                                 0                   1cd91e55b90d7       csi-hostpathplugin-cknjm
	d5f248fa30b79       registry.k8s.io/ingress-nginx/controller@sha256:6f5dc094109641d694359903ad34e32bfd57cae94b17766fba2952400fa1207a                             31 seconds ago       Running             controller                               0                   ae71f63696a73       ingress-nginx-controller-798b8b85d7-dgtcd
	3e677433b5b07       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                39 seconds ago       Running             node-driver-registrar                    0                   1cd91e55b90d7       csi-hostpathplugin-cknjm
	616c613f76e22       gcr.io/k8s-minikube/kube-registry-proxy@sha256:d9de4e135913fc5254f1299e25fe71cd402f9176ddb3fffb1527775b7224f621                              41 seconds ago       Running             registry-proxy                           0                   4162b38dca87e       registry-proxy-gzc8v
	4a8ef7d1c7b42       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      47 seconds ago       Running             volume-snapshot-controller               0                   a9c7d78050707       snapshot-controller-58dbcc7b99-wzvtx
	175f41a8a3fa4       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              47 seconds ago       Running             csi-resizer                              0                   82a69db598c3e       csi-hostpath-resizer-0
	909fd770ecd17       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:543c40fd093964bc9ab509d3e791f9989963021f1e9e4c9c7b6700b02bfb227b                   49 seconds ago       Exited              patch                                    0                   8b9d4ffccd3ce       ingress-nginx-admission-patch-ghcbs
	9c6ec3fa3fbb9       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:543c40fd093964bc9ab509d3e791f9989963021f1e9e4c9c7b6700b02bfb227b                   50 seconds ago       Exited              create                                   0                   0f5ae06c0dd24       ingress-nginx-admission-create-ts7wq
	2d89c0a605330       registry.k8s.io/metrics-server/metrics-server@sha256:401a4b9796f3f80c1f03d22cd7b1a26839f515a36032ef49b682c237e5848ab3                        53 seconds ago       Running             metrics-server                           0                   133dc84c8f645       metrics-server-7c66d45ddc-z9mjl
	584e064431c03       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   About a minute ago   Running             csi-external-health-monitor-controller   0                   1cd91e55b90d7       csi-hostpathplugin-cknjm
	1b94bbb6d7158       docker.io/library/registry@sha256:561ecbab74a78b52e52585ced90a46293b0923ad1faa380b73ab2335aee444c3                                           About a minute ago   Running             registry                                 0                   812d587daaded       registry-9gb28
	2496ab92ce3b3       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             About a minute ago   Running             csi-attacher                             0                   67d9d71831cc1       csi-hostpath-attacher-0
	54c9c80f37d71       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   f40567439389f       snapshot-controller-58dbcc7b99-9bh9g
	ff91e59590532       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             About a minute ago   Running             storage-provisioner                      0                   f67f4b59fff1b       storage-provisioner
	09640ab64c9e3       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                                                             About a minute ago   Running             coredns                                  0                   dfa5b24be9c2e       coredns-5dd5756b68-hfcps
	167ee46f9d959       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:01b7311f9512411ef6530e09dbdd3aeaea0abc4101227dbead4d44c36b255ca7                            About a minute ago   Running             gadget                                   0                   29386755dbdbd       gadget-sj9pr
	bc3bc7a2efc34       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26                                                                             About a minute ago   Running             kindnet-cni                              0                   25ad99282e234       kindnet-ndjjv
	9b74f354c3e42       7da62c127fc0f2c3473babe4dd0fe1da874278c4e524a490b1781e3e0e6dddfa                                                                             About a minute ago   Running             kube-proxy                               0                   304dabec2b3c4       kube-proxy-f7vqg
	2e4f9411a1317       30bb499447fe1bc655853e2e8ac386cdeb28c80263536259cb54f290f9a58d6c                                                                             2 minutes ago        Running             kube-apiserver                           0                   36f2aa42dd8f6       kube-apiserver-addons-351470
	aa7951c2ccd7b       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                                                             2 minutes ago        Running             etcd                                     0                   b7e51139bb281       etcd-addons-351470
	b9a946790be0a       64fc40cee3716a4596d219b360ce536adb7998eaeae3f5dbb774d6503f5039d7                                                                             2 minutes ago        Running             kube-scheduler                           0                   d1bc0a4f2eea8       kube-scheduler-addons-351470
	3d0df458cb176       89d57b83c17862d0ca2dd214e9e5ad425f8d67ecba32d10b846f8d22d3b5597c                                                                             2 minutes ago        Running             kube-controller-manager                  0                   5b30b1c7cb5a5       kube-controller-manager-addons-351470
	
	* 
	* ==> coredns [09640ab64c9e3fd8591f1e9c07e99e93fae53168af85c6d774a91d832d0b236e] <==
	* [INFO] 10.244.0.13:39707 - 327 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00009961s
	[INFO] 10.244.0.13:41853 - 56436 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002447242s
	[INFO] 10.244.0.13:41853 - 40050 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00295223s
	[INFO] 10.244.0.13:47497 - 34052 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000138683s
	[INFO] 10.244.0.13:47497 - 1543 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000106962s
	[INFO] 10.244.0.13:59983 - 62230 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000124637s
	[INFO] 10.244.0.13:59983 - 39186 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000054187s
	[INFO] 10.244.0.13:41919 - 29357 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000065838s
	[INFO] 10.244.0.13:41919 - 34382 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000040804s
	[INFO] 10.244.0.13:55415 - 45796 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00011607s
	[INFO] 10.244.0.13:55415 - 32482 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000116128s
	[INFO] 10.244.0.13:60413 - 20683 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001503418s
	[INFO] 10.244.0.13:60413 - 64457 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001735649s
	[INFO] 10.244.0.13:34400 - 56883 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000096575s
	[INFO] 10.244.0.13:34400 - 16974 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000221006s
	[INFO] 10.244.0.17:54910 - 22248 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000184485s
	[INFO] 10.244.0.17:44066 - 35599 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000373721s
	[INFO] 10.244.0.17:43945 - 27194 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000255098s
	[INFO] 10.244.0.17:46148 - 53141 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000197572s
	[INFO] 10.244.0.17:37493 - 35623 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000118737s
	[INFO] 10.244.0.17:40849 - 55352 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000121781s
	[INFO] 10.244.0.17:43278 - 41512 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002354065s
	[INFO] 10.244.0.17:46630 - 64065 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002420486s
	[INFO] 10.244.0.17:40414 - 40874 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001078463s
	[INFO] 10.244.0.17:37277 - 16920 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.000664603s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-351470
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-351470
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36
	                    minikube.k8s.io/name=addons-351470
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_18T18_56_07_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-351470
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-351470"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Sep 2023 18:56:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-351470
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Sep 2023 18:57:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Sep 2023 18:57:38 +0000   Mon, 18 Sep 2023 18:56:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Sep 2023 18:57:38 +0000   Mon, 18 Sep 2023 18:56:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Sep 2023 18:57:38 +0000   Mon, 18 Sep 2023 18:56:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Sep 2023 18:57:38 +0000   Mon, 18 Sep 2023 18:56:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-351470
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 aa1a1936bf2243d08bf379cb71b1e695
	  System UUID:                c5c6d9d6-2050-47b3-8715-c5c8506037d3
	  Boot ID:                    43cd75a3-7352-4de5-a11c-da52fa8117dc
	  Kernel Version:             5.15.0-1044-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (20 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  gadget                      gadget-sj9pr                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  gcp-auth                    gcp-auth-d4c87556c-7tlfs                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  ingress-nginx               ingress-nginx-controller-798b8b85d7-dgtcd    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         104s
	  kube-system                 coredns-5dd5756b68-hfcps                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     108s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 csi-hostpathplugin-cknjm                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 etcd-addons-351470                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m2s
	  kube-system                 kindnet-ndjjv                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      109s
	  kube-system                 kube-apiserver-addons-351470                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-controller-manager-addons-351470        200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-proxy-f7vqg                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-addons-351470                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 metrics-server-7c66d45ddc-z9mjl              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         105s
	  kube-system                 registry-9gb28                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 registry-proxy-gzc8v                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 snapshot-controller-58dbcc7b99-9bh9g         0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 snapshot-controller-58dbcc7b99-wzvtx         0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             510Mi (6%)   220Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 104s                   kube-proxy       
	  Normal  Starting                 2m10s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m10s (x8 over 2m10s)  kubelet          Node addons-351470 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m10s (x8 over 2m10s)  kubelet          Node addons-351470 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m10s (x8 over 2m10s)  kubelet          Node addons-351470 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m2s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m2s                   kubelet          Node addons-351470 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m2s                   kubelet          Node addons-351470 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m2s                   kubelet          Node addons-351470 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           110s                   node-controller  Node addons-351470 event: Registered Node addons-351470 in Controller
	  Normal  NodeReady                76s                    kubelet          Node addons-351470 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.000693] FS-Cache: N-cookie c=0000001e [p=00000015 fl=2 nc=0 na=1]
	[  +0.000932] FS-Cache: N-cookie d=000000003f524057{9p.inode} n=000000000dcb9a4e
	[  +0.001115] FS-Cache: N-key=[8] 'd06eed0000000000'
	[  +0.003589] FS-Cache: Duplicate cookie detected
	[  +0.000756] FS-Cache: O-cookie c=00000018 [p=00000015 fl=226 nc=0 na=1]
	[  +0.001018] FS-Cache: O-cookie d=000000003f524057{9p.inode} n=000000001afa753f
	[  +0.001043] FS-Cache: O-key=[8] 'd06eed0000000000'
	[  +0.000719] FS-Cache: N-cookie c=0000001f [p=00000015 fl=2 nc=0 na=1]
	[  +0.000926] FS-Cache: N-cookie d=000000003f524057{9p.inode} n=0000000041e09c7b
	[  +0.001050] FS-Cache: N-key=[8] 'd06eed0000000000'
	[  +2.717536] FS-Cache: Duplicate cookie detected
	[  +0.000756] FS-Cache: O-cookie c=00000016 [p=00000015 fl=226 nc=0 na=1]
	[  +0.001073] FS-Cache: O-cookie d=000000003f524057{9p.inode} n=00000000bf75ac96
	[  +0.001037] FS-Cache: O-key=[8] 'cf6eed0000000000'
	[  +0.000781] FS-Cache: N-cookie c=00000021 [p=00000015 fl=2 nc=0 na=1]
	[  +0.000928] FS-Cache: N-cookie d=000000003f524057{9p.inode} n=000000000dcb9a4e
	[  +0.001071] FS-Cache: N-key=[8] 'cf6eed0000000000'
	[  +0.385708] FS-Cache: Duplicate cookie detected
	[  +0.000766] FS-Cache: O-cookie c=0000001b [p=00000015 fl=226 nc=0 na=1]
	[  +0.000934] FS-Cache: O-cookie d=000000003f524057{9p.inode} n=0000000078afb02a
	[  +0.001146] FS-Cache: O-key=[8] 'd56eed0000000000'
	[  +0.000766] FS-Cache: N-cookie c=00000022 [p=00000015 fl=2 nc=0 na=1]
	[  +0.000936] FS-Cache: N-cookie d=000000003f524057{9p.inode} n=0000000051123a3d
	[  +0.001113] FS-Cache: N-key=[8] 'd56eed0000000000'
	[ +26.862938] new mount options do not match the existing superblock, will be ignored
	
	* 
	* ==> etcd [aa7951c2ccd7ba064436381289baf8c319bc9403a2669b598c4ea318e47aad2e] <==
	* {"level":"info","ts":"2023-09-18T18:56:19.245064Z","caller":"traceutil/trace.go:171","msg":"trace[2035749418] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/certificate-controller; range_end:; response_count:1; response_revision:350; }","duration":"184.447071ms","start":"2023-09-18T18:56:19.060604Z","end":"2023-09-18T18:56:19.245051Z","steps":["trace[2035749418] 'agreement among raft nodes before linearized reading'  (duration: 181.747995ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-18T18:56:19.245266Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"184.689148ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/endpointslice-controller\" ","response":"range_response_count:1 size:214"}
	{"level":"info","ts":"2023-09-18T18:56:19.245296Z","caller":"traceutil/trace.go:171","msg":"trace[1487117599] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/endpointslice-controller; range_end:; response_count:1; response_revision:350; }","duration":"184.723158ms","start":"2023-09-18T18:56:19.060567Z","end":"2023-09-18T18:56:19.24529Z","steps":["trace[1487117599] 'agreement among raft nodes before linearized reading'  (duration: 176.561997ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-18T18:56:19.396538Z","caller":"traceutil/trace.go:171","msg":"trace[710800917] transaction","detail":"{read_only:false; response_revision:357; number_of_response:1; }","duration":"107.00896ms","start":"2023-09-18T18:56:19.289494Z","end":"2023-09-18T18:56:19.396503Z","steps":["trace[710800917] 'process raft request'  (duration: 106.9605ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-18T18:56:19.396953Z","caller":"traceutil/trace.go:171","msg":"trace[2068315121] transaction","detail":"{read_only:false; response_revision:356; number_of_response:1; }","duration":"128.802083ms","start":"2023-09-18T18:56:19.26814Z","end":"2023-09-18T18:56:19.396942Z","steps":["trace[2068315121] 'process raft request'  (duration: 125.107063ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-18T18:56:21.638119Z","caller":"traceutil/trace.go:171","msg":"trace[962160822] transaction","detail":"{read_only:false; response_revision:383; number_of_response:1; }","duration":"125.255429ms","start":"2023-09-18T18:56:21.512848Z","end":"2023-09-18T18:56:21.638104Z","steps":["trace[962160822] 'process raft request'  (duration: 119.042315ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-18T18:56:22.146153Z","caller":"traceutil/trace.go:171","msg":"trace[963383276] linearizableReadLoop","detail":"{readStateIndex:402; appliedIndex:401; }","duration":"174.106508ms","start":"2023-09-18T18:56:21.972022Z","end":"2023-09-18T18:56:22.146129Z","steps":["trace[963383276] 'read index received'  (duration: 136.932836ms)","trace[963383276] 'applied index is now lower than readState.Index'  (duration: 37.171728ms)"],"step_count":2}
	{"level":"warn","ts":"2023-09-18T18:56:22.177815Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"207.712333ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/default/cloud-spanner-emulator\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-09-18T18:56:22.178677Z","caller":"traceutil/trace.go:171","msg":"trace[419712821] range","detail":"{range_begin:/registry/services/specs/default/cloud-spanner-emulator; range_end:; response_count:0; response_revision:391; }","duration":"208.582664ms","start":"2023-09-18T18:56:21.970076Z","end":"2023-09-18T18:56:22.178659Z","steps":["trace[419712821] 'agreement among raft nodes before linearized reading'  (duration: 176.122905ms)","trace[419712821] 'range keys from in-memory index tree'  (duration: 30.588397ms)"],"step_count":2}
	{"level":"info","ts":"2023-09-18T18:56:22.210504Z","caller":"traceutil/trace.go:171","msg":"trace[561903903] transaction","detail":"{read_only:false; response_revision:392; number_of_response:1; }","duration":"120.034591ms","start":"2023-09-18T18:56:22.080856Z","end":"2023-09-18T18:56:22.200891Z","steps":["trace[561903903] 'process raft request'  (duration: 50.295953ms)","trace[561903903] 'get key's previous created_revision and leaseID' {req_type:put; key:/registry/replicasets/kube-system/coredns-5dd5756b68; req_size:3734; } (duration: 15.705088ms)","trace[561903903] 'marshal mvccpb.KeyValue' {req_type:put; key:/registry/replicasets/kube-system/coredns-5dd5756b68; req_size:3734; } (duration: 35.622017ms)"],"step_count":3}
	{"level":"info","ts":"2023-09-18T18:56:22.182975Z","caller":"traceutil/trace.go:171","msg":"trace[558194179] transaction","detail":"{read_only:false; response_revision:391; number_of_response:1; }","duration":"212.821055ms","start":"2023-09-18T18:56:21.970136Z","end":"2023-09-18T18:56:22.182957Z","steps":["trace[558194179] 'process raft request'  (duration: 138.904482ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-18T18:56:22.21763Z","caller":"traceutil/trace.go:171","msg":"trace[923681165] transaction","detail":"{read_only:false; response_revision:393; number_of_response:1; }","duration":"126.730367ms","start":"2023-09-18T18:56:22.090885Z","end":"2023-09-18T18:56:22.217615Z","steps":["trace[923681165] 'process raft request'  (duration: 91.934504ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-18T18:56:22.216903Z","caller":"traceutil/trace.go:171","msg":"trace[1575384881] transaction","detail":"{read_only:false; response_revision:394; number_of_response:1; }","duration":"125.751423ms","start":"2023-09-18T18:56:22.091136Z","end":"2023-09-18T18:56:22.216887Z","steps":["trace[1575384881] 'process raft request'  (duration: 92.000777ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-18T18:56:22.276021Z","caller":"traceutil/trace.go:171","msg":"trace[654307322] transaction","detail":"{read_only:false; response_revision:395; number_of_response:1; }","duration":"165.139762ms","start":"2023-09-18T18:56:22.110857Z","end":"2023-09-18T18:56:22.275997Z","steps":["trace[654307322] 'process raft request'  (duration: 105.836842ms)","trace[654307322] 'compare'  (duration: 40.675739ms)"],"step_count":2}
	{"level":"info","ts":"2023-09-18T18:56:22.276415Z","caller":"traceutil/trace.go:171","msg":"trace[875341937] linearizableReadLoop","detail":"{readStateIndex:406; appliedIndex:405; }","duration":"118.063922ms","start":"2023-09-18T18:56:22.158342Z","end":"2023-09-18T18:56:22.276406Z","steps":["trace[875341937] 'read index received'  (duration: 47.150221ms)","trace[875341937] 'applied index is now lower than readState.Index'  (duration: 70.912692ms)"],"step_count":2}
	{"level":"warn","ts":"2023-09-18T18:56:22.276973Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.84103ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges/default/\" range_end:\"/registry/limitranges/default0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-09-18T18:56:22.29902Z","caller":"traceutil/trace.go:171","msg":"trace[612501840] range","detail":"{range_begin:/registry/limitranges/default/; range_end:/registry/limitranges/default0; response_count:0; response_revision:397; }","duration":"151.873998ms","start":"2023-09-18T18:56:22.147114Z","end":"2023-09-18T18:56:22.298988Z","steps":["trace[612501840] 'agreement among raft nodes before linearized reading'  (duration: 129.824226ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-18T18:56:22.277006Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"166.322588ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/minikube-ingress-dns\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-09-18T18:56:22.299247Z","caller":"traceutil/trace.go:171","msg":"trace[2039677809] range","detail":"{range_begin:/registry/clusterroles/minikube-ingress-dns; range_end:; response_count:0; response_revision:397; }","duration":"188.551307ms","start":"2023-09-18T18:56:22.110679Z","end":"2023-09-18T18:56:22.29923Z","steps":["trace[2039677809] 'agreement among raft nodes before linearized reading'  (duration: 166.313152ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-18T18:56:22.277027Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"185.932675ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-09-18T18:56:22.29938Z","caller":"traceutil/trace.go:171","msg":"trace[835302840] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/storage-provisioner; range_end:; response_count:0; response_revision:397; }","duration":"208.283134ms","start":"2023-09-18T18:56:22.09109Z","end":"2023-09-18T18:56:22.299373Z","steps":["trace[835302840] 'agreement among raft nodes before linearized reading'  (duration: 185.923723ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-18T18:56:22.27706Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"196.250624ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiextensions.k8s.io/customresourcedefinitions/\" range_end:\"/registry/apiextensions.k8s.io/customresourcedefinitions0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-09-18T18:56:22.299531Z","caller":"traceutil/trace.go:171","msg":"trace[1036173628] range","detail":"{range_begin:/registry/apiextensions.k8s.io/customresourcedefinitions/; range_end:/registry/apiextensions.k8s.io/customresourcedefinitions0; response_count:0; response_revision:397; }","duration":"218.728493ms","start":"2023-09-18T18:56:22.080796Z","end":"2023-09-18T18:56:22.299524Z","steps":["trace[1036173628] 'agreement among raft nodes before linearized reading'  (duration: 196.241927ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-18T18:56:22.300118Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.772043ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllers/kube-system/registry\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-09-18T18:56:22.300163Z","caller":"traceutil/trace.go:171","msg":"trace[547363066] range","detail":"{range_begin:/registry/controllers/kube-system/registry; range_end:; response_count:0; response_revision:398; }","duration":"103.829094ms","start":"2023-09-18T18:56:22.196328Z","end":"2023-09-18T18:56:22.300157Z","steps":["trace[547363066] 'agreement among raft nodes before linearized reading'  (duration: 103.743341ms)"],"step_count":1}
	
	* 
	* ==> gcp-auth [1fd8f42713f99d9bf278658a3bddc08632d8a5184dcf5796706a3fb27d74d102] <==
	* 2023/09/18 18:57:59 GCP Auth Webhook started!
	
	* 
	* ==> kernel <==
	*  18:58:08 up  2:40,  0 users,  load average: 1.57, 2.06, 2.22
	Linux addons-351470 5.15.0-1044-aws #49~20.04.1-Ubuntu SMP Mon Aug 21 17:10:24 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [bc3bc7a2efc3455ae2c556d097f198d3b762b97baec3db87dafb598883ba6f4f] <==
	* I0918 18:56:21.846204       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0918 18:56:21.846339       1 main.go:107] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0918 18:56:21.846837       1 main.go:116] setting mtu 1500 for CNI 
	I0918 18:56:21.846863       1 main.go:146] kindnetd IP family: "ipv4"
	I0918 18:56:21.848113       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0918 18:56:52.336980       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0918 18:56:52.352017       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0918 18:56:52.352051       1 main.go:227] handling current node
	I0918 18:57:02.366792       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0918 18:57:02.366819       1 main.go:227] handling current node
	I0918 18:57:12.388585       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0918 18:57:12.388620       1 main.go:227] handling current node
	I0918 18:57:22.392794       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0918 18:57:22.392821       1 main.go:227] handling current node
	I0918 18:57:32.405533       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0918 18:57:32.405557       1 main.go:227] handling current node
	I0918 18:57:42.409674       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0918 18:57:42.409880       1 main.go:227] handling current node
	I0918 18:57:52.421671       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0918 18:57:52.421700       1 main.go:227] handling current node
	I0918 18:58:02.425753       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0918 18:58:02.425780       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [2e4f9411a13174f2468bbd89045133116db4a2be404c11eed2c4dd236d814c07] <==
	* , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0918 18:56:28.629035       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.107.212.184"}
	E0918 18:56:33.326222       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["system","node-high","leader-election","workload-high","workload-low","global-default","catch-all","exempt"] items=[{},{},{},{},{},{},{},{}]
	E0918 18:56:43.327651       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["leader-election","workload-high","workload-low","global-default","catch-all","exempt","system","node-high"] items=[{},{},{},{},{},{},{},{}]
	W0918 18:56:52.718023       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.212.184:443: connect: connection refused
	E0918 18:56:52.718139       1 dispatcher.go:214] failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.212.184:443: connect: connection refused
	W0918 18:56:52.719076       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.212.184:443: connect: connection refused
	E0918 18:56:52.719179       1 dispatcher.go:214] failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.212.184:443: connect: connection refused
	E0918 18:56:53.336917       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["node-high","leader-election","workload-high","workload-low","global-default","catch-all","exempt","system"] items=[{},{},{},{},{},{},{},{}]
	I0918 18:57:03.061537       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0918 18:57:03.338315       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-low","global-default","catch-all","exempt","system","node-high","leader-election","workload-high"] items=[{},{},{},{},{},{},{},{}]
	E0918 18:57:13.339519       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["node-high","leader-election","workload-high","workload-low","global-default","catch-all","exempt","system"] items=[{},{},{},{},{},{},{},{}]
	E0918 18:57:17.605583       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.108.38.193:443/apis/metrics.k8s.io/v1beta1: Get "https://10.108.38.193:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.108.38.193:443: connect: connection refused
	W0918 18:57:17.605731       1 handler_proxy.go:93] no RequestInfo found in the context
	E0918 18:57:17.605805       1 controller.go:143] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0918 18:57:17.692000       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object has been modified; please apply your changes to the latest version and try again
	I0918 18:57:17.713554       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0918 18:57:17.715696       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0918 18:57:23.340636       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-low","global-default","catch-all","exempt","system","node-high","leader-election","workload-high"] items=[{},{},{},{},{},{},{},{}]
	E0918 18:57:33.343152       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["system","node-high","leader-election","workload-high","workload-low","global-default","catch-all","exempt"] items=[{},{},{},{},{},{},{},{}]
	E0918 18:57:43.343567       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["global-default","catch-all","exempt","system","node-high","leader-election","workload-high","workload-low"] items=[{},{},{},{},{},{},{},{}]
	E0918 18:57:53.344365       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-low","global-default","catch-all","exempt","system","node-high","leader-election","workload-high"] items=[{},{},{},{},{},{},{},{}]
	I0918 18:58:03.066337       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0918 18:58:03.345081       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-high","workload-low","global-default","catch-all","exempt","system","node-high","leader-election"] items=[{},{},{},{},{},{},{},{}]
	
	* 
	* ==> kube-controller-manager [3d0df458cb176112d7f53062169e23e1749c27999712b327814b4c98c095df80] <==
	* I0918 18:57:29.649650       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0918 18:57:29.683030       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0918 18:57:29.900544       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0918 18:57:29.909055       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0918 18:57:29.916862       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0918 18:57:29.920939       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0918 18:57:29.934488       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0918 18:57:29.942794       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0918 18:57:29.949522       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0918 18:57:29.949908       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-create" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0918 18:57:32.810127       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/cloud-spanner-emulator-7d49f968d9" duration="63.508µs"
	I0918 18:57:36.599557       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/snapshot-controller-58dbcc7b99" duration="7.946367ms"
	I0918 18:57:36.599678       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/snapshot-controller-58dbcc7b99" duration="85.777µs"
	I0918 18:57:37.812216       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-798b8b85d7" duration="65.297µs"
	I0918 18:57:43.702929       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/cloud-spanner-emulator-7d49f968d9" duration="41.526µs"
	I0918 18:57:54.890790       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-798b8b85d7" duration="30.891545ms"
	I0918 18:57:54.891093       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-798b8b85d7" duration="61.063µs"
	I0918 18:57:56.886637       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/cloud-spanner-emulator-7d49f968d9" duration="39.303µs"
	I0918 18:57:59.025889       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0918 18:57:59.033168       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0918 18:57:59.094459       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0918 18:57:59.102372       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0918 18:57:59.906385       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-d4c87556c" duration="10.887569ms"
	I0918 18:57:59.907296       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-d4c87556c" duration="42.593µs"
	I0918 18:58:06.194854       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/cloud-spanner-emulator-7d49f968d9" duration="4.85µs"
	
	* 
	* ==> kube-proxy [9b74f354c3e42b0a24d3b6ed9117840479ba1971081f7d182d6e3d55af67b335] <==
	* I0918 18:56:23.655695       1 server_others.go:69] "Using iptables proxy"
	I0918 18:56:23.724114       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0918 18:56:23.825182       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0918 18:56:23.830560       1 server_others.go:152] "Using iptables Proxier"
	I0918 18:56:23.830783       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0918 18:56:23.830880       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0918 18:56:23.830936       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0918 18:56:23.831176       1 server.go:846] "Version info" version="v1.28.2"
	I0918 18:56:23.831186       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0918 18:56:23.834170       1 config.go:188] "Starting service config controller"
	I0918 18:56:23.834571       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0918 18:56:23.834678       1 config.go:97] "Starting endpoint slice config controller"
	I0918 18:56:23.834712       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0918 18:56:23.835268       1 config.go:315] "Starting node config controller"
	I0918 18:56:23.836463       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0918 18:56:23.935346       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0918 18:56:23.935480       1 shared_informer.go:318] Caches are synced for service config
	I0918 18:56:23.938858       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [b9a946790be0ac81dae9060f7ee78cb6ec1b785ba8f4f3c6bd3c17f0779af07e] <==
	* W0918 18:56:03.285621       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0918 18:56:03.285658       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0918 18:56:03.285738       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0918 18:56:03.285774       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0918 18:56:03.285853       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0918 18:56:03.285888       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0918 18:56:03.285956       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0918 18:56:03.285991       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0918 18:56:03.312088       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0918 18:56:03.312714       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0918 18:56:04.160069       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0918 18:56:04.160196       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0918 18:56:04.215613       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0918 18:56:04.215735       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0918 18:56:04.225477       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0918 18:56:04.225585       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0918 18:56:04.233759       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0918 18:56:04.233917       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0918 18:56:04.269060       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0918 18:56:04.269176       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0918 18:56:04.282826       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0918 18:56:04.282959       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0918 18:56:04.407820       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0918 18:56:04.407883       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0918 18:56:05.964055       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Sep 18 18:57:46 addons-351470 kubelet[1355]: I0918 18:57:46.849374    1355 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-cknjm" podStartSLOduration=2.28946448 podCreationTimestamp="2023-09-18 18:56:52 +0000 UTC" firstStartedPulling="2023-09-18 18:56:53.25877762 +0000 UTC m=+46.827629137" lastFinishedPulling="2023-09-18 18:57:45.818645918 +0000 UTC m=+99.387497436" observedRunningTime="2023-09-18 18:57:46.848829596 +0000 UTC m=+100.417681122" watchObservedRunningTime="2023-09-18 18:57:46.849332779 +0000 UTC m=+100.418184296"
	Sep 18 18:57:55 addons-351470 kubelet[1355]: I0918 18:57:55.687933    1355 kubelet_pods.go:906] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/cloud-spanner-emulator-7d49f968d9-2wm2h" secret="" err="secret \"gcp-auth\" not found"
	Sep 18 18:57:55 addons-351470 kubelet[1355]: I0918 18:57:55.688079    1355 scope.go:117] "RemoveContainer" containerID="ff17fdd028d234485d7e76def0e5d550958dde8495c42fb39d9f75a5c6a3cd3b"
	Sep 18 18:57:55 addons-351470 kubelet[1355]: I0918 18:57:55.688949    1355 scope.go:117] "RemoveContainer" containerID="807db0ca635566ee9baed73c1cd45bcd2327294ecec898a1a9fe5bcf64c59335"
	Sep 18 18:57:56 addons-351470 kubelet[1355]: I0918 18:57:56.859630    1355 scope.go:117] "RemoveContainer" containerID="807db0ca635566ee9baed73c1cd45bcd2327294ecec898a1a9fe5bcf64c59335"
	Sep 18 18:57:56 addons-351470 kubelet[1355]: I0918 18:57:56.860037    1355 kubelet_pods.go:906] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/cloud-spanner-emulator-7d49f968d9-2wm2h" secret="" err="secret \"gcp-auth\" not found"
	Sep 18 18:57:56 addons-351470 kubelet[1355]: I0918 18:57:56.860068    1355 scope.go:117] "RemoveContainer" containerID="ccd2c967eda2e96854237ab8a74c1a129efd530198405b0d1575510a228f1358"
	Sep 18 18:57:56 addons-351470 kubelet[1355]: E0918 18:57:56.861126    1355 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloud-spanner-emulator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cloud-spanner-emulator pod=cloud-spanner-emulator-7d49f968d9-2wm2h_default(4757cd07-5aa4-4fb4-b4be-af4087e07f4f)\"" pod="default/cloud-spanner-emulator-7d49f968d9-2wm2h" podUID="4757cd07-5aa4-4fb4-b4be-af4087e07f4f"
	Sep 18 18:57:56 addons-351470 kubelet[1355]: I0918 18:57:56.863421    1355 scope.go:117] "RemoveContainer" containerID="e259a9b96620de8c65296ac7e19746227c99d8938415365981914197971ea99c"
	Sep 18 18:57:56 addons-351470 kubelet[1355]: E0918 18:57:56.863662    1355 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minikube-ingress-dns\" with CrashLoopBackOff: \"back-off 40s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(38dc4cd8-2f37-4e52-a13c-99f3ec43b6b1)\"" pod="kube-system/kube-ingress-dns-minikube" podUID="38dc4cd8-2f37-4e52-a13c-99f3ec43b6b1"
	Sep 18 18:57:56 addons-351470 kubelet[1355]: I0918 18:57:56.893927    1355 scope.go:117] "RemoveContainer" containerID="ff17fdd028d234485d7e76def0e5d550958dde8495c42fb39d9f75a5c6a3cd3b"
	Sep 18 18:58:00 addons-351470 kubelet[1355]: I0918 18:58:00.689422    1355 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="b78cea60-5913-42d7-a1c5-4f1a2c9a8b19" path="/var/lib/kubelet/pods/b78cea60-5913-42d7-a1c5-4f1a2c9a8b19/volumes"
	Sep 18 18:58:00 addons-351470 kubelet[1355]: I0918 18:58:00.689813    1355 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="e31bcd22-d095-4aac-9c26-a6698a318209" path="/var/lib/kubelet/pods/e31bcd22-d095-4aac-9c26-a6698a318209/volumes"
	Sep 18 18:58:06 addons-351470 kubelet[1355]: I0918 18:58:06.222369    1355 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="gcp-auth/gcp-auth-d4c87556c-7tlfs" podStartSLOduration=95.539853122 podCreationTimestamp="2023-09-18 18:56:28 +0000 UTC" firstStartedPulling="2023-09-18 18:57:56.989304496 +0000 UTC m=+110.558156014" lastFinishedPulling="2023-09-18 18:57:59.671773223 +0000 UTC m=+113.240624741" observedRunningTime="2023-09-18 18:57:59.897575228 +0000 UTC m=+113.466426754" watchObservedRunningTime="2023-09-18 18:58:06.222321849 +0000 UTC m=+119.791173367"
	Sep 18 18:58:06 addons-351470 kubelet[1355]: I0918 18:58:06.375446    1355 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8859f\" (UniqueName: \"kubernetes.io/projected/4757cd07-5aa4-4fb4-b4be-af4087e07f4f-kube-api-access-8859f\") pod \"4757cd07-5aa4-4fb4-b4be-af4087e07f4f\" (UID: \"4757cd07-5aa4-4fb4-b4be-af4087e07f4f\") "
	Sep 18 18:58:06 addons-351470 kubelet[1355]: I0918 18:58:06.380369    1355 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4757cd07-5aa4-4fb4-b4be-af4087e07f4f-kube-api-access-8859f" (OuterVolumeSpecName: "kube-api-access-8859f") pod "4757cd07-5aa4-4fb4-b4be-af4087e07f4f" (UID: "4757cd07-5aa4-4fb4-b4be-af4087e07f4f"). InnerVolumeSpecName "kube-api-access-8859f". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 18 18:58:06 addons-351470 kubelet[1355]: I0918 18:58:06.476508    1355 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-8859f\" (UniqueName: \"kubernetes.io/projected/4757cd07-5aa4-4fb4-b4be-af4087e07f4f-kube-api-access-8859f\") on node \"addons-351470\" DevicePath \"\""
	Sep 18 18:58:06 addons-351470 kubelet[1355]: I0918 18:58:06.574071    1355 scope.go:117] "RemoveContainer" containerID="ccd2c967eda2e96854237ab8a74c1a129efd530198405b0d1575510a228f1358"
	Sep 18 18:58:06 addons-351470 kubelet[1355]: I0918 18:58:06.612962    1355 scope.go:117] "RemoveContainer" containerID="51c57c39bd60ec564dafd71add86818a5ad06e3d4ce63362e3cebd2d6082815f"
	Sep 18 18:58:06 addons-351470 kubelet[1355]: I0918 18:58:06.658359    1355 scope.go:117] "RemoveContainer" containerID="aab74105c866af4f4563c3966bf50c6dcf807e9d4aaab514a2c1c3d2856983fb"
	Sep 18 18:58:06 addons-351470 kubelet[1355]: E0918 18:58:06.807612    1355 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/e9b4663357dd280a901a38d792897f968a311e140d17bee291507a0bc3892ae0/diff" to get inode usage: stat /var/lib/containers/storage/overlay/e9b4663357dd280a901a38d792897f968a311e140d17bee291507a0bc3892ae0/diff: no such file or directory, extraDiskErr: <nil>
	Sep 18 18:58:06 addons-351470 kubelet[1355]: E0918 18:58:06.839350    1355 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/7cd5695c3fffc37a209f4a557fe128a1eb5a00532466887ca668881ea420d7b5/diff" to get inode usage: stat /var/lib/containers/storage/overlay/7cd5695c3fffc37a209f4a557fe128a1eb5a00532466887ca668881ea420d7b5/diff: no such file or directory, extraDiskErr: <nil>
	Sep 18 18:58:06 addons-351470 kubelet[1355]: E0918 18:58:06.861284    1355 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/e9b4663357dd280a901a38d792897f968a311e140d17bee291507a0bc3892ae0/diff" to get inode usage: stat /var/lib/containers/storage/overlay/e9b4663357dd280a901a38d792897f968a311e140d17bee291507a0bc3892ae0/diff: no such file or directory, extraDiskErr: <nil>
	Sep 18 18:58:06 addons-351470 kubelet[1355]: E0918 18:58:06.891011    1355 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/c11cfd68f63035bf530ecb371a353220066516b7b28183b70e55999afb7a8997/diff" to get inode usage: stat /var/lib/containers/storage/overlay/c11cfd68f63035bf530ecb371a353220066516b7b28183b70e55999afb7a8997/diff: no such file or directory, extraDiskErr: <nil>
	Sep 18 18:58:08 addons-351470 kubelet[1355]: I0918 18:58:08.689871    1355 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="4757cd07-5aa4-4fb4-b4be-af4087e07f4f" path="/var/lib/kubelet/pods/4757cd07-5aa4-4fb4-b4be-af4087e07f4f/volumes"
	
	* 
	* ==> storage-provisioner [ff91e59590532e6807bbf0754b199da42ea10c81bbc85bfccd729d1dabf8256b] <==
	* I0918 18:56:53.596983       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0918 18:56:53.675288       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0918 18:56:53.675388       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0918 18:56:53.684952       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0918 18:56:53.685143       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-351470_fbf979bb-7e69-4903-9fa1-d5de07fb11f6!
	I0918 18:56:53.687458       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"89132dcf-7876-4f00-b25d-71dc01ae6fa5", APIVersion:"v1", ResourceVersion:"827", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-351470_fbf979bb-7e69-4903-9fa1-d5de07fb11f6 became leader
	I0918 18:56:53.786101       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-351470_fbf979bb-7e69-4903-9fa1-d5de07fb11f6!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-351470 -n addons-351470
helpers_test.go:261: (dbg) Run:  kubectl --context addons-351470 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-ts7wq ingress-nginx-admission-patch-ghcbs
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-351470 describe pod ingress-nginx-admission-create-ts7wq ingress-nginx-admission-patch-ghcbs
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-351470 describe pod ingress-nginx-admission-create-ts7wq ingress-nginx-admission-patch-ghcbs: exit status 1 (89.650587ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-ts7wq" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-ghcbs" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-351470 describe pod ingress-nginx-admission-create-ts7wq ingress-nginx-admission-patch-ghcbs: exit status 1
--- FAIL: TestAddons/parallel/Headlamp (3.63s)
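To iterate on a single failing test like this one locally, the usual approach is to re-run only that subtest against the prebuilt binary. A minimal sketch, assuming the minikube repository layout this job uses (test/integration) and standard `go test` flags; binary-selection flags vary by setup and are omitted here:

	# -run takes a slash-separated regex over the subtest path
	go test ./test/integration -run 'TestAddons/parallel/Headlamp' -timeout 30m -v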

x
+
TestIngressAddonLegacy/serial/ValidateIngressAddons (180.42s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-407320 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
E0918 19:08:28.171165  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/client.crt: no such file or directory
addons_test.go:183: (dbg) Done: kubectl --context ingress-addon-legacy-407320 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (14.500544502s)
addons_test.go:208: (dbg) Run:  kubectl --context ingress-addon-legacy-407320 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context ingress-addon-legacy-407320 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [9c462842-fc5d-426f-a017-25d6d81d3dfd] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [9c462842-fc5d-426f-a017-25d6d81d3dfd] Running
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 8.012988107s
addons_test.go:238: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-407320 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E0918 19:10:14.596230  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/functional-382151/client.crt: no such file or directory
E0918 19:10:14.601517  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/functional-382151/client.crt: no such file or directory
E0918 19:10:14.611867  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/functional-382151/client.crt: no such file or directory
E0918 19:10:14.632156  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/functional-382151/client.crt: no such file or directory
E0918 19:10:14.672460  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/functional-382151/client.crt: no such file or directory
E0918 19:10:14.752741  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/functional-382151/client.crt: no such file or directory
E0918 19:10:14.913207  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/functional-382151/client.crt: no such file or directory
E0918 19:10:15.233565  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/functional-382151/client.crt: no such file or directory
E0918 19:10:15.874484  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/functional-382151/client.crt: no such file or directory
E0918 19:10:17.155363  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/functional-382151/client.crt: no such file or directory
E0918 19:10:19.716051  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/functional-382151/client.crt: no such file or directory
E0918 19:10:24.836575  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/functional-382151/client.crt: no such file or directory
E0918 19:10:35.077025  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/functional-382151/client.crt: no such file or directory
addons_test.go:238: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ingress-addon-legacy-407320 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.56486182s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:254: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:262: (dbg) Run:  kubectl --context ingress-addon-legacy-407320 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-407320 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.49.2
E0918 19:10:55.557270  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/functional-382151/client.crt: no such file or directory
addons_test.go:273: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.010148479s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:275: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:279: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
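When nslookup against the node IP (192.168.49.2) times out like this, a quick manual triage is to confirm the ingress-dns pod is actually running and serving on that address. A sketch with generic kubectl; the pod name kube-ingress-dns-minikube matches the one in the kubelet log earlier in this report, but adjust if the addon names it differently:

	# is the DNS pod running, and on which node IP?
	kubectl --context ingress-addon-legacy-407320 -n kube-system get pod kube-ingress-dns-minikube -o wide
	# check whether it is crash-looping or logging query errors
	kubectl --context ingress-addon-legacy-407320 -n kube-system logs kube-ingress-dns-minikube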
addons_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-407320 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-407320 addons disable ingress-dns --alsologtostderr -v=1: (2.070022969s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-407320 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-407320 addons disable ingress --alsologtostderr -v=1: (7.675303557s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-407320
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-407320:

-- stdout --
	[
	    {
	        "Id": "2e3aa303ac39e15ec5ab1498ee93e37694c170476113e6d0dd72b6ab2c8e1eb5",
	        "Created": "2023-09-18T19:06:43.085023267Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 676191,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-09-18T19:06:43.407590816Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:560a33002deec07a703a16e2b1dbf6aecde4c0d46aaefa1cb6df4c8c8a7774a7",
	        "ResolvConfPath": "/var/lib/docker/containers/2e3aa303ac39e15ec5ab1498ee93e37694c170476113e6d0dd72b6ab2c8e1eb5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2e3aa303ac39e15ec5ab1498ee93e37694c170476113e6d0dd72b6ab2c8e1eb5/hostname",
	        "HostsPath": "/var/lib/docker/containers/2e3aa303ac39e15ec5ab1498ee93e37694c170476113e6d0dd72b6ab2c8e1eb5/hosts",
	        "LogPath": "/var/lib/docker/containers/2e3aa303ac39e15ec5ab1498ee93e37694c170476113e6d0dd72b6ab2c8e1eb5/2e3aa303ac39e15ec5ab1498ee93e37694c170476113e6d0dd72b6ab2c8e1eb5-json.log",
	        "Name": "/ingress-addon-legacy-407320",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-407320:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-407320",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/012c7590374d57f84396f28fc83d95f8209a481644e2783cfba2741979aed273-init/diff:/var/lib/docker/overlay2/4e03e4714bce8b0ad83859c0e431c5abac0520d3520e787a29bac63ee8779cc4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/012c7590374d57f84396f28fc83d95f8209a481644e2783cfba2741979aed273/merged",
	                "UpperDir": "/var/lib/docker/overlay2/012c7590374d57f84396f28fc83d95f8209a481644e2783cfba2741979aed273/diff",
	                "WorkDir": "/var/lib/docker/overlay2/012c7590374d57f84396f28fc83d95f8209a481644e2783cfba2741979aed273/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-407320",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-407320/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-407320",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-407320",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-407320",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d3344ee85159a3a56000ac74a0d2950247564caf807cac36daf1ef444117765d",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33430"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33429"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33426"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33428"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33427"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/d3344ee85159",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-407320": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "2e3aa303ac39",
	                        "ingress-addon-legacy-407320"
	                    ],
	                    "NetworkID": "e9a09fff803a889239db4d1440eb5d9bd2886d66d70940a6f6edb786cf95af88",
	                    "EndpointID": "22140d0f2fc87537d1a09817073b90a1dd1c526c146c4f95c6e9d53e9b604fb8",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ingress-addon-legacy-407320 -n ingress-addon-legacy-407320
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-407320 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-407320 logs -n 25: (1.456756318s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| Command |                                  Args                                  |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| image   | functional-382151 image load --daemon                                  | functional-382151           | jenkins | v1.31.2 | 18 Sep 23 19:06 UTC | 18 Sep 23 19:06 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-382151               |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image   | functional-382151 image ls                                             | functional-382151           | jenkins | v1.31.2 | 18 Sep 23 19:06 UTC | 18 Sep 23 19:06 UTC |
	| image   | functional-382151 image load --daemon                                  | functional-382151           | jenkins | v1.31.2 | 18 Sep 23 19:06 UTC | 18 Sep 23 19:06 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-382151               |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image   | functional-382151 image ls                                             | functional-382151           | jenkins | v1.31.2 | 18 Sep 23 19:06 UTC | 18 Sep 23 19:06 UTC |
	| image   | functional-382151 image save                                           | functional-382151           | jenkins | v1.31.2 | 18 Sep 23 19:06 UTC | 18 Sep 23 19:06 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-382151               |                             |         |         |                     |                     |
	|         | /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image   | functional-382151 image rm                                             | functional-382151           | jenkins | v1.31.2 | 18 Sep 23 19:06 UTC | 18 Sep 23 19:06 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-382151               |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image   | functional-382151 image ls                                             | functional-382151           | jenkins | v1.31.2 | 18 Sep 23 19:06 UTC | 18 Sep 23 19:06 UTC |
	| image   | functional-382151 image load                                           | functional-382151           | jenkins | v1.31.2 | 18 Sep 23 19:06 UTC | 18 Sep 23 19:06 UTC |
	|         | /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image   | functional-382151 image ls                                             | functional-382151           | jenkins | v1.31.2 | 18 Sep 23 19:06 UTC | 18 Sep 23 19:06 UTC |
	| image   | functional-382151 image save --daemon                                  | functional-382151           | jenkins | v1.31.2 | 18 Sep 23 19:06 UTC | 18 Sep 23 19:06 UTC |
	|         | gcr.io/google-containers/addon-resizer:functional-382151               |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image   | functional-382151                                                      | functional-382151           | jenkins | v1.31.2 | 18 Sep 23 19:06 UTC | 18 Sep 23 19:06 UTC |
	|         | image ls --format short                                                |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image   | functional-382151                                                      | functional-382151           | jenkins | v1.31.2 | 18 Sep 23 19:06 UTC | 18 Sep 23 19:06 UTC |
	|         | image ls --format yaml                                                 |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| ssh     | functional-382151 ssh pgrep                                            | functional-382151           | jenkins | v1.31.2 | 18 Sep 23 19:06 UTC |                     |
	|         | buildkitd                                                              |                             |         |         |                     |                     |
	| image   | functional-382151                                                      | functional-382151           | jenkins | v1.31.2 | 18 Sep 23 19:06 UTC | 18 Sep 23 19:06 UTC |
	|         | image ls --format json                                                 |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image   | functional-382151                                                      | functional-382151           | jenkins | v1.31.2 | 18 Sep 23 19:06 UTC | 18 Sep 23 19:06 UTC |
	|         | image ls --format table                                                |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image   | functional-382151 image build -t                                       | functional-382151           | jenkins | v1.31.2 | 18 Sep 23 19:06 UTC | 18 Sep 23 19:06 UTC |
	|         | localhost/my-image:functional-382151                                   |                             |         |         |                     |                     |
	|         | testdata/build --alsologtostderr                                       |                             |         |         |                     |                     |
	| image   | functional-382151 image ls                                             | functional-382151           | jenkins | v1.31.2 | 18 Sep 23 19:06 UTC | 18 Sep 23 19:06 UTC |
	| delete  | -p functional-382151                                                   | functional-382151           | jenkins | v1.31.2 | 18 Sep 23 19:06 UTC | 18 Sep 23 19:06 UTC |
	| start   | -p ingress-addon-legacy-407320                                         | ingress-addon-legacy-407320 | jenkins | v1.31.2 | 18 Sep 23 19:06 UTC | 18 Sep 23 19:08 UTC |
	|         | --kubernetes-version=v1.18.20                                          |                             |         |         |                     |                     |
	|         | --memory=4096 --wait=true                                              |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                      |                             |         |         |                     |                     |
	|         | -v=5 --driver=docker                                                   |                             |         |         |                     |                     |
	|         | --container-runtime=crio                                               |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-407320                                            | ingress-addon-legacy-407320 | jenkins | v1.31.2 | 18 Sep 23 19:08 UTC | 18 Sep 23 19:08 UTC |
	|         | addons enable ingress                                                  |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-407320                                            | ingress-addon-legacy-407320 | jenkins | v1.31.2 | 18 Sep 23 19:08 UTC | 18 Sep 23 19:08 UTC |
	|         | addons enable ingress-dns                                              |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| ssh     | ingress-addon-legacy-407320                                            | ingress-addon-legacy-407320 | jenkins | v1.31.2 | 18 Sep 23 19:08 UTC |                     |
	|         | ssh curl -s http://127.0.0.1/                                          |                             |         |         |                     |                     |
	|         | -H 'Host: nginx.example.com'                                           |                             |         |         |                     |                     |
	| ip      | ingress-addon-legacy-407320 ip                                         | ingress-addon-legacy-407320 | jenkins | v1.31.2 | 18 Sep 23 19:10 UTC | 18 Sep 23 19:10 UTC |
	| addons  | ingress-addon-legacy-407320                                            | ingress-addon-legacy-407320 | jenkins | v1.31.2 | 18 Sep 23 19:11 UTC | 18 Sep 23 19:11 UTC |
	|         | addons disable ingress-dns                                             |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-407320                                            | ingress-addon-legacy-407320 | jenkins | v1.31.2 | 18 Sep 23 19:11 UTC | 18 Sep 23 19:11 UTC |
	|         | addons disable ingress                                                 |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	|---------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/18 19:06:22
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.21.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0918 19:06:22.889724  675721 out.go:296] Setting OutFile to fd 1 ...
	I0918 19:06:22.889982  675721 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 19:06:22.889993  675721 out.go:309] Setting ErrFile to fd 2...
	I0918 19:06:22.889999  675721 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 19:06:22.890269  675721 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17263-642665/.minikube/bin
	I0918 19:06:22.890737  675721 out.go:303] Setting JSON to false
	I0918 19:06:22.892063  675721 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":10128,"bootTime":1695053855,"procs":399,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0918 19:06:22.892151  675721 start.go:138] virtualization:  
	I0918 19:06:22.894729  675721 out.go:177] * [ingress-addon-legacy-407320] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0918 19:06:22.897831  675721 out.go:177]   - MINIKUBE_LOCATION=17263
	I0918 19:06:22.900252  675721 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 19:06:22.898059  675721 notify.go:220] Checking for updates...
	I0918 19:06:22.904552  675721 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17263-642665/kubeconfig
	I0918 19:06:22.906591  675721 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17263-642665/.minikube
	I0918 19:06:22.908825  675721 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0918 19:06:22.910946  675721 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 19:06:22.913098  675721 driver.go:373] Setting default libvirt URI to qemu:///system
	I0918 19:06:22.938964  675721 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0918 19:06:22.939077  675721 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0918 19:06:23.036894  675721 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:36 SystemTime:2023-09-18 19:06:23.026558522 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0918 19:06:23.037009  675721 docker.go:294] overlay module found
	I0918 19:06:23.039616  675721 out.go:177] * Using the docker driver based on user configuration
	I0918 19:06:23.041736  675721 start.go:298] selected driver: docker
	I0918 19:06:23.041758  675721 start.go:902] validating driver "docker" against <nil>
	I0918 19:06:23.041782  675721 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 19:06:23.042386  675721 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0918 19:06:23.110234  675721 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:36 SystemTime:2023-09-18 19:06:23.100980957 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0918 19:06:23.110400  675721 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0918 19:06:23.110631  675721 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 19:06:23.112859  675721 out.go:177] * Using Docker driver with root privileges
	I0918 19:06:23.114995  675721 cni.go:84] Creating CNI manager for ""
	I0918 19:06:23.115015  675721 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0918 19:06:23.115026  675721 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0918 19:06:23.115042  675721 start_flags.go:321] config:
	{Name:ingress-addon-legacy-407320 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-407320 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0918 19:06:23.118687  675721 out.go:177] * Starting control plane node ingress-addon-legacy-407320 in cluster ingress-addon-legacy-407320
	I0918 19:06:23.120601  675721 cache.go:122] Beginning downloading kic base image for docker with crio
	I0918 19:06:23.122658  675721 out.go:177] * Pulling base image ...
	I0918 19:06:23.124862  675721 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0918 19:06:23.124945  675721 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon
	I0918 19:06:23.142302  675721 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon, skipping pull
	I0918 19:06:23.142323  675721 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 exists in daemon, skipping load
	I0918 19:06:23.195443  675721 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I0918 19:06:23.195468  675721 cache.go:57] Caching tarball of preloaded images
	I0918 19:06:23.195639  675721 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0918 19:06:23.197966  675721 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0918 19:06:23.199796  675721 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I0918 19:06:23.308989  675721 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4?checksum=md5:8ddd7f37d9a9977fe856222993d36c3d -> /home/jenkins/minikube-integration/17263-642665/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I0918 19:06:35.140183  675721 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I0918 19:06:35.140304  675721 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17263-642665/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I0918 19:06:36.335824  675721 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on crio
	I0918 19:06:36.336239  675721 profile.go:148] Saving config to /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/ingress-addon-legacy-407320/config.json ...
	I0918 19:06:36.336274  675721 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/ingress-addon-legacy-407320/config.json: {Name:mkb67a38ee7c4776dd692a560861c596923061c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:06:36.336453  675721 cache.go:195] Successfully downloaded all kic artifacts
	I0918 19:06:36.336480  675721 start.go:365] acquiring machines lock for ingress-addon-legacy-407320: {Name:mkf20679793a97ed4e4d4eba47a467e467428630 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 19:06:36.336536  675721 start.go:369] acquired machines lock for "ingress-addon-legacy-407320" in 44.095µs
	I0918 19:06:36.336560  675721 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-407320 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-407320 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0918 19:06:36.336622  675721 start.go:125] createHost starting for "" (driver="docker")
	I0918 19:06:36.339237  675721 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0918 19:06:36.339523  675721 start.go:159] libmachine.API.Create for "ingress-addon-legacy-407320" (driver="docker")
	I0918 19:06:36.339554  675721 client.go:168] LocalClient.Create starting
	I0918 19:06:36.339636  675721 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17263-642665/.minikube/certs/ca.pem
	I0918 19:06:36.339675  675721 main.go:141] libmachine: Decoding PEM data...
	I0918 19:06:36.339698  675721 main.go:141] libmachine: Parsing certificate...
	I0918 19:06:36.339759  675721 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17263-642665/.minikube/certs/cert.pem
	I0918 19:06:36.339798  675721 main.go:141] libmachine: Decoding PEM data...
	I0918 19:06:36.339816  675721 main.go:141] libmachine: Parsing certificate...
	I0918 19:06:36.340197  675721 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-407320 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0918 19:06:36.357048  675721 cli_runner.go:211] docker network inspect ingress-addon-legacy-407320 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0918 19:06:36.357153  675721 network_create.go:281] running [docker network inspect ingress-addon-legacy-407320] to gather additional debugging logs...
	I0918 19:06:36.357195  675721 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-407320
	W0918 19:06:36.373616  675721 cli_runner.go:211] docker network inspect ingress-addon-legacy-407320 returned with exit code 1
	I0918 19:06:36.373651  675721 network_create.go:284] error running [docker network inspect ingress-addon-legacy-407320]: docker network inspect ingress-addon-legacy-407320: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-407320 not found
	I0918 19:06:36.373665  675721 network_create.go:286] output of [docker network inspect ingress-addon-legacy-407320]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-407320 not found
	
	** /stderr **
	I0918 19:06:36.373728  675721 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0918 19:06:36.393763  675721 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4000bea710}
	I0918 19:06:36.393800  675721 network_create.go:123] attempt to create docker network ingress-addon-legacy-407320 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0918 19:06:36.393857  675721 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-407320 ingress-addon-legacy-407320
	I0918 19:06:36.473262  675721 network_create.go:107] docker network ingress-addon-legacy-407320 192.168.49.0/24 created
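The scan above picked the first free private subnet (192.168.49.0/24) and created a profile-scoped bridge network with a fixed gateway and MTU. As a minimal sketch for readers reproducing this step (profile name taken from the log), the recorded IPAM settings can be read back with:

    # Verify the subnet and gateway Docker recorded for the new network
    docker network inspect ingress-addon-legacy-407320 \
      --format 'subnet={{range .IPAM.Config}}{{.Subnet}}{{end}} gateway={{range .IPAM.Config}}{{.Gateway}}{{end}}'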
	I0918 19:06:36.473295  675721 kic.go:117] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-407320" container
	I0918 19:06:36.473374  675721 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0918 19:06:36.490125  675721 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-407320 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-407320 --label created_by.minikube.sigs.k8s.io=true
	I0918 19:06:36.509025  675721 oci.go:103] Successfully created a docker volume ingress-addon-legacy-407320
	I0918 19:06:36.509120  675721 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-407320-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-407320 --entrypoint /usr/bin/test -v ingress-addon-legacy-407320:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -d /var/lib
	I0918 19:06:38.040443  675721 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-407320-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-407320 --entrypoint /usr/bin/test -v ingress-addon-legacy-407320:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -d /var/lib: (1.53127292s)
	I0918 19:06:38.040476  675721 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-407320
	I0918 19:06:38.040496  675721 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0918 19:06:38.040516  675721 kic.go:190] Starting extracting preloaded images to volume ...
	I0918 19:06:38.040613  675721 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17263-642665/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-407320:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -I lz4 -xf /preloaded.tar -C /extractDir
	I0918 19:06:42.998040  675721 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17263-642665/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-407320:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -I lz4 -xf /preloaded.tar -C /extractDir: (4.957378228s)
	I0918 19:06:42.998072  675721 kic.go:199] duration metric: took 4.957553 seconds to extract preloaded images to volume
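The two docker runs above are the kic driver's sidecar pattern: a throwaway container mounts the freshly created named volume, first to sanity-check it (/usr/bin/test -d /var/lib), then to stream the lz4 preload tarball into it so the node container starts with images already on disk. A sketch of the same pattern, where PRELOAD and KICBASE are placeholders for the tarball path and pinned base image shown in the log:

    # Extract an lz4 tarball into a named volume via a throwaway container
    docker run --rm --entrypoint /usr/bin/tar \
      -v "$PRELOAD":/preloaded.tar:ro \
      -v ingress-addon-legacy-407320:/extractDir \
      "$KICBASE" -I lz4 -xf /preloaded.tar -C /extractDir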
	W0918 19:06:42.998217  675721 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0918 19:06:42.998327  675721 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0918 19:06:43.068590  675721 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-407320 --name ingress-addon-legacy-407320 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-407320 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-407320 --network ingress-addon-legacy-407320 --ip 192.168.49.2 --volume ingress-addon-legacy-407320:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3
	I0918 19:06:43.415930  675721 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-407320 --format={{.State.Running}}
	I0918 19:06:43.440754  675721 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-407320 --format={{.State.Status}}
	I0918 19:06:43.471880  675721 cli_runner.go:164] Run: docker exec ingress-addon-legacy-407320 stat /var/lib/dpkg/alternatives/iptables
	I0918 19:06:43.555727  675721 oci.go:144] the created container "ingress-addon-legacy-407320" has a running status.
	I0918 19:06:43.555755  675721 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17263-642665/.minikube/machines/ingress-addon-legacy-407320/id_rsa...
	I0918 19:06:43.876968  675721 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17263-642665/.minikube/machines/ingress-addon-legacy-407320/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0918 19:06:43.877059  675721 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17263-642665/.minikube/machines/ingress-addon-legacy-407320/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0918 19:06:43.898219  675721 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-407320 --format={{.State.Status}}
	I0918 19:06:43.927313  675721 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0918 19:06:43.927334  675721 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-407320 chown docker:docker /home/docker/.ssh/authorized_keys]
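At this point the generated public key is installed and owned by the in-container docker user, so the node is reachable over the published SSH port (33430 in this run, per the dial below). A sketch, with the key path abbreviated relative to the minikube home used in this job:

    # Log in as the provisioning user with the generated key
    ssh -i .minikube/machines/ingress-addon-legacy-407320/id_rsa \
        -p 33430 docker@127.0.0.1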
	I0918 19:06:44.033517  675721 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-407320 --format={{.State.Status}}
	I0918 19:06:44.054384  675721 machine.go:88] provisioning docker machine ...
	I0918 19:06:44.054413  675721 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-407320"
	I0918 19:06:44.054488  675721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-407320
	I0918 19:06:44.078020  675721 main.go:141] libmachine: Using SSH client type: native
	I0918 19:06:44.078448  675721 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 33430 <nil> <nil>}
	I0918 19:06:44.078472  675721 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-407320 && echo "ingress-addon-legacy-407320" | sudo tee /etc/hostname
	I0918 19:06:44.079047  675721 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45890->127.0.0.1:33430: read: connection reset by peer
	I0918 19:06:47.235014  675721 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-407320
	
	I0918 19:06:47.235142  675721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-407320
	I0918 19:06:47.254036  675721 main.go:141] libmachine: Using SSH client type: native
	I0918 19:06:47.254458  675721 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 33430 <nil> <nil>}
	I0918 19:06:47.254483  675721 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-407320' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-407320/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-407320' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0918 19:06:47.393109  675721 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 19:06:47.393137  675721 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17263-642665/.minikube CaCertPath:/home/jenkins/minikube-integration/17263-642665/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17263-642665/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17263-642665/.minikube}
	I0918 19:06:47.393160  675721 ubuntu.go:177] setting up certificates
	I0918 19:06:47.393169  675721 provision.go:83] configureAuth start
	I0918 19:06:47.393236  675721 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-407320
	I0918 19:06:47.411635  675721 provision.go:138] copyHostCerts
	I0918 19:06:47.411677  675721 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17263-642665/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17263-642665/.minikube/ca.pem
	I0918 19:06:47.411707  675721 exec_runner.go:144] found /home/jenkins/minikube-integration/17263-642665/.minikube/ca.pem, removing ...
	I0918 19:06:47.411718  675721 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17263-642665/.minikube/ca.pem
	I0918 19:06:47.411919  675721 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17263-642665/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17263-642665/.minikube/ca.pem (1082 bytes)
	I0918 19:06:47.412010  675721 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17263-642665/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17263-642665/.minikube/cert.pem
	I0918 19:06:47.412032  675721 exec_runner.go:144] found /home/jenkins/minikube-integration/17263-642665/.minikube/cert.pem, removing ...
	I0918 19:06:47.412037  675721 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17263-642665/.minikube/cert.pem
	I0918 19:06:47.412069  675721 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17263-642665/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17263-642665/.minikube/cert.pem (1123 bytes)
	I0918 19:06:47.412114  675721 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17263-642665/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17263-642665/.minikube/key.pem
	I0918 19:06:47.412132  675721 exec_runner.go:144] found /home/jenkins/minikube-integration/17263-642665/.minikube/key.pem, removing ...
	I0918 19:06:47.412146  675721 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17263-642665/.minikube/key.pem
	I0918 19:06:47.412173  675721 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17263-642665/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17263-642665/.minikube/key.pem (1675 bytes)
	I0918 19:06:47.412226  675721 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17263-642665/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17263-642665/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17263-642665/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-407320 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-407320]
	I0918 19:06:48.152308  675721 provision.go:172] copyRemoteCerts
	I0918 19:06:48.152396  675721 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0918 19:06:48.152448  675721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-407320
	I0918 19:06:48.171931  675721 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33430 SSHKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/ingress-addon-legacy-407320/id_rsa Username:docker}
	I0918 19:06:48.270516  675721 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17263-642665/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0918 19:06:48.270580  675721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17263-642665/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0918 19:06:48.299322  675721 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17263-642665/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0918 19:06:48.299383  675721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17263-642665/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0918 19:06:48.328436  675721 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17263-642665/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0918 19:06:48.328497  675721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17263-642665/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0918 19:06:48.357209  675721 provision.go:86] duration metric: configureAuth took 964.024637ms
	I0918 19:06:48.357277  675721 ubuntu.go:193] setting minikube options for container-runtime
	I0918 19:06:48.357505  675721 config.go:182] Loaded profile config "ingress-addon-legacy-407320": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0918 19:06:48.357620  675721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-407320
	I0918 19:06:48.375405  675721 main.go:141] libmachine: Using SSH client type: native
	I0918 19:06:48.376012  675721 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 33430 <nil> <nil>}
	I0918 19:06:48.376037  675721 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0918 19:06:48.656620  675721 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0918 19:06:48.656642  675721 machine.go:91] provisioned docker machine in 4.602240158s
	I0918 19:06:48.656653  675721 client.go:171] LocalClient.Create took 12.317089468s
	I0918 19:06:48.656668  675721 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-407320" took 12.317144516s
	I0918 19:06:48.656676  675721 start.go:300] post-start starting for "ingress-addon-legacy-407320" (driver="docker")
	I0918 19:06:48.656685  675721 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0918 19:06:48.656759  675721 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0918 19:06:48.656803  675721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-407320
	I0918 19:06:48.675185  675721 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33430 SSHKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/ingress-addon-legacy-407320/id_rsa Username:docker}
	I0918 19:06:48.774589  675721 ssh_runner.go:195] Run: cat /etc/os-release
	I0918 19:06:48.778775  675721 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0918 19:06:48.778814  675721 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0918 19:06:48.778826  675721 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0918 19:06:48.778833  675721 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0918 19:06:48.778847  675721 filesync.go:126] Scanning /home/jenkins/minikube-integration/17263-642665/.minikube/addons for local assets ...
	I0918 19:06:48.778915  675721 filesync.go:126] Scanning /home/jenkins/minikube-integration/17263-642665/.minikube/files for local assets ...
	I0918 19:06:48.779001  675721 filesync.go:149] local asset: /home/jenkins/minikube-integration/17263-642665/.minikube/files/etc/ssl/certs/6480032.pem -> 6480032.pem in /etc/ssl/certs
	I0918 19:06:48.779014  675721 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17263-642665/.minikube/files/etc/ssl/certs/6480032.pem -> /etc/ssl/certs/6480032.pem
	I0918 19:06:48.779124  675721 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0918 19:06:48.789722  675721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17263-642665/.minikube/files/etc/ssl/certs/6480032.pem --> /etc/ssl/certs/6480032.pem (1708 bytes)
	I0918 19:06:48.818531  675721 start.go:303] post-start completed in 161.838756ms
	I0918 19:06:48.818910  675721 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-407320
	I0918 19:06:48.837117  675721 profile.go:148] Saving config to /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/ingress-addon-legacy-407320/config.json ...
	I0918 19:06:48.837436  675721 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0918 19:06:48.837486  675721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-407320
	I0918 19:06:48.860043  675721 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33430 SSHKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/ingress-addon-legacy-407320/id_rsa Username:docker}
	I0918 19:06:48.954180  675721 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0918 19:06:48.960079  675721 start.go:128] duration metric: createHost completed in 12.623439834s
	I0918 19:06:48.960104  675721 start.go:83] releasing machines lock for "ingress-addon-legacy-407320", held for 12.623555371s
	I0918 19:06:48.960186  675721 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-407320
	I0918 19:06:48.978262  675721 ssh_runner.go:195] Run: cat /version.json
	I0918 19:06:48.978278  675721 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0918 19:06:48.978319  675721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-407320
	I0918 19:06:48.978337  675721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-407320
	I0918 19:06:49.000253  675721 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33430 SSHKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/ingress-addon-legacy-407320/id_rsa Username:docker}
	I0918 19:06:49.009409  675721 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33430 SSHKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/ingress-addon-legacy-407320/id_rsa Username:docker}
	I0918 19:06:49.096587  675721 ssh_runner.go:195] Run: systemctl --version
	I0918 19:06:49.232932  675721 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0918 19:06:49.380750  675721 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0918 19:06:49.386394  675721 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0918 19:06:49.413050  675721 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0918 19:06:49.413143  675721 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0918 19:06:49.454603  675721 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0918 19:06:49.454629  675721 start.go:469] detecting cgroup driver to use...
	I0918 19:06:49.454663  675721 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0918 19:06:49.454714  675721 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0918 19:06:49.473285  675721 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0918 19:06:49.487170  675721 docker.go:196] disabling cri-docker service (if available) ...
	I0918 19:06:49.487235  675721 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0918 19:06:49.503378  675721 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0918 19:06:49.520785  675721 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0918 19:06:49.622166  675721 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0918 19:06:49.731105  675721 docker.go:212] disabling docker service ...
	I0918 19:06:49.731211  675721 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0918 19:06:49.753039  675721 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0918 19:06:49.766993  675721 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0918 19:06:49.869492  675721 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0918 19:06:49.972800  675721 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0918 19:06:49.987176  675721 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 19:06:50.011194  675721 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0918 19:06:50.011299  675721 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 19:06:50.030573  675721 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0918 19:06:50.030673  675721 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 19:06:50.044332  675721 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 19:06:50.057873  675721 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 19:06:50.070693  675721 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0918 19:06:50.082900  675721 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0918 19:06:50.094780  675721 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0918 19:06:50.108472  675721 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 19:06:50.210062  675721 ssh_runner.go:195] Run: sudo systemctl restart crio
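The steps above rewrite CRI-O's drop-in config in place (pause image, cgroupfs cgroup manager, pod-scoped conmon cgroup), point crictl at CRI-O's socket via /etc/crictl.yaml, and restart the daemon. A quick post-restart sanity check, with paths taken from the log and crictl assumed on PATH as in the kicbase image:

    grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
    sudo systemctl is-active crio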
	I0918 19:06:50.335416  675721 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0918 19:06:50.335485  675721 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0918 19:06:50.340399  675721 start.go:537] Will wait 60s for crictl version
	I0918 19:06:50.340518  675721 ssh_runner.go:195] Run: which crictl
	I0918 19:06:50.345111  675721 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0918 19:06:50.388776  675721 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0918 19:06:50.388898  675721 ssh_runner.go:195] Run: crio --version
	I0918 19:06:50.435531  675721 ssh_runner.go:195] Run: crio --version
	I0918 19:06:50.495373  675721 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.6 ...
	I0918 19:06:50.497804  675721 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-407320 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0918 19:06:50.515933  675721 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0918 19:06:50.520538  675721 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 19:06:50.534246  675721 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0918 19:06:50.534318  675721 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 19:06:50.586968  675721 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0918 19:06:50.587046  675721 ssh_runner.go:195] Run: which lz4
	I0918 19:06:50.591688  675721 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17263-642665/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 -> /preloaded.tar.lz4
	I0918 19:06:50.591805  675721 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0918 19:06:50.596257  675721 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0918 19:06:50.596310  675721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17263-642665/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 --> /preloaded.tar.lz4 (489766197 bytes)
	I0918 19:06:52.793075  675721 crio.go:444] Took 2.201321 seconds to copy over tarball
	I0918 19:06:52.793158  675721 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0918 19:06:55.545676  675721 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.752490667s)
	I0918 19:06:55.545700  675721 crio.go:451] Took 2.752597 seconds to extract the tarball
	I0918 19:06:55.545709  675721 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0918 19:06:55.807914  675721 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 19:06:55.851365  675721 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0918 19:06:55.851391  675721 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0918 19:06:55.851469  675721 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0918 19:06:55.851533  675721 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0918 19:06:55.851736  675721 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0918 19:06:55.851738  675721 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0918 19:06:55.851910  675721 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0918 19:06:55.851933  675721 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0918 19:06:55.851497  675721 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0918 19:06:55.851483  675721 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 19:06:55.852946  675721 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0918 19:06:55.853509  675721 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0918 19:06:55.853566  675721 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0918 19:06:55.853632  675721 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0918 19:06:55.853670  675721 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0918 19:06:55.853726  675721 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0918 19:06:55.853882  675721 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0918 19:06:55.853923  675721 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 19:06:56.289302  675721 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	W0918 19:06:56.292895  675721 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0918 19:06:56.293203  675721 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	W0918 19:06:56.307622  675721 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0918 19:06:56.308014  675721 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	W0918 19:06:56.322770  675721 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I0918 19:06:56.323036  675721 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	W0918 19:06:56.348604  675721 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0918 19:06:56.349147  675721 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	W0918 19:06:56.354112  675721 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0918 19:06:56.354347  675721 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	W0918 19:06:56.354684  675721 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I0918 19:06:56.354936  675721 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0918 19:06:56.358083  675721 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I0918 19:06:56.358179  675721 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0918 19:06:56.358252  675721 ssh_runner.go:195] Run: which crictl
	I0918 19:06:56.393340  675721 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I0918 19:06:56.393421  675721 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0918 19:06:56.393496  675721 ssh_runner.go:195] Run: which crictl
	I0918 19:06:56.493813  675721 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I0918 19:06:56.493867  675721 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0918 19:06:56.493927  675721 ssh_runner.go:195] Run: which crictl
	I0918 19:06:56.497352  675721 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I0918 19:06:56.497411  675721 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I0918 19:06:56.497463  675721 ssh_runner.go:195] Run: which crictl
	W0918 19:06:56.498212  675721 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0918 19:06:56.498419  675721 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 19:06:56.537223  675721 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I0918 19:06:56.537281  675721 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0918 19:06:56.537372  675721 ssh_runner.go:195] Run: which crictl
	I0918 19:06:56.541435  675721 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I0918 19:06:56.541503  675721 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0918 19:06:56.541577  675721 ssh_runner.go:195] Run: which crictl
	I0918 19:06:56.541643  675721 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I0918 19:06:56.541578  675721 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0918 19:06:56.541449  675721 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I0918 19:06:56.541775  675721 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0918 19:06:56.541818  675721 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I0918 19:06:56.541825  675721 ssh_runner.go:195] Run: which crictl
	I0918 19:06:56.541745  675721 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0918 19:06:56.737736  675721 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0918 19:06:56.737798  675721 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0918 19:06:56.737842  675721 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 19:06:56.737912  675721 ssh_runner.go:195] Run: which crictl
	I0918 19:06:56.737962  675721 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	I0918 19:06:56.738036  675721 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I0918 19:06:56.738078  675721 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0918 19:06:56.738152  675721 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	I0918 19:06:56.738193  675721 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I0918 19:06:56.738250  675721 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0918 19:06:56.828031  675721 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	I0918 19:06:56.828146  675721 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	I0918 19:06:56.828199  675721 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 19:06:56.828265  675721 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	I0918 19:06:56.893142  675721 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0918 19:06:56.893237  675721 cache_images.go:92] LoadImages completed in 1.041831674s
	W0918 19:06:56.893320  675721 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20: no such file or directory
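None of the arm64 image tarballs exist in the local cache, so the cache load is skipped and the images will be pulled at init time instead. The expected set for this Kubernetes version can be listed up front; a sketch, assuming a kubeadm binary matching the target version:

    # List the control-plane images kubeadm will want for v1.18.20
    kubeadm config images list --kubernetes-version v1.18.20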
	I0918 19:06:56.893395  675721 ssh_runner.go:195] Run: crio config
	I0918 19:06:56.950849  675721 cni.go:84] Creating CNI manager for ""
	I0918 19:06:56.950873  675721 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0918 19:06:56.950920  675721 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0918 19:06:56.950945  675721 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-407320 NodeName:ingress-addon-legacy-407320 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0918 19:06:56.951128  675721 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-407320"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0918 19:06:56.951222  675721 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=ingress-addon-legacy-407320 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-407320 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0918 19:06:56.951292  675721 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0918 19:06:56.961793  675721 binaries.go:44] Found k8s binaries, skipping transfer
	I0918 19:06:56.961868  675721 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0918 19:06:56.972470  675721 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I0918 19:06:56.993555  675721 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0918 19:06:57.017450  675721 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
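The rendered kubeadm config now sits at /var/tmp/minikube/kubeadm.yaml.new. It can be exercised without mutating the node before the real init; a sketch using the pinned binary path from the log:

    sudo /var/lib/minikube/binaries/v1.18.20/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run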
	I0918 19:06:57.041710  675721 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0918 19:06:57.046619  675721 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 19:06:57.062157  675721 certs.go:56] Setting up /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/ingress-addon-legacy-407320 for IP: 192.168.49.2
	I0918 19:06:57.062189  675721 certs.go:190] acquiring lock for shared ca certs: {Name:mkb16b377708c2d983623434e9d896d9d8fd7133 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:06:57.062333  675721 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17263-642665/.minikube/ca.key
	I0918 19:06:57.062385  675721 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17263-642665/.minikube/proxy-client-ca.key
	I0918 19:06:57.062438  675721 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/ingress-addon-legacy-407320/client.key
	I0918 19:06:57.062453  675721 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/ingress-addon-legacy-407320/client.crt with IP's: []
	I0918 19:06:57.580770  675721 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/ingress-addon-legacy-407320/client.crt ...
	I0918 19:06:57.580807  675721 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/ingress-addon-legacy-407320/client.crt: {Name:mk71798e14a629b3badf99b16fe8666638a658a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:06:57.581060  675721 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/ingress-addon-legacy-407320/client.key ...
	I0918 19:06:57.581077  675721 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/ingress-addon-legacy-407320/client.key: {Name:mk7d6b61d388306d59bb68aa2325e503ccabd69d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:06:57.581227  675721 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/ingress-addon-legacy-407320/apiserver.key.dd3b5fb2
	I0918 19:06:57.581253  675721 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/ingress-addon-legacy-407320/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0918 19:06:57.845366  675721 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/ingress-addon-legacy-407320/apiserver.crt.dd3b5fb2 ...
	I0918 19:06:57.845413  675721 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/ingress-addon-legacy-407320/apiserver.crt.dd3b5fb2: {Name:mk01d752bcf12ed810e5fbc6c38a4897452c6cbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:06:57.845618  675721 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/ingress-addon-legacy-407320/apiserver.key.dd3b5fb2 ...
	I0918 19:06:57.845637  675721 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/ingress-addon-legacy-407320/apiserver.key.dd3b5fb2: {Name:mk5ab2bc84d92ae8321aa3836b1c3915874d850e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:06:57.845725  675721 certs.go:337] copying /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/ingress-addon-legacy-407320/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/ingress-addon-legacy-407320/apiserver.crt
	I0918 19:06:57.845817  675721 certs.go:341] copying /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/ingress-addon-legacy-407320/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/ingress-addon-legacy-407320/apiserver.key
	I0918 19:06:57.845883  675721 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/ingress-addon-legacy-407320/proxy-client.key
	I0918 19:06:57.845898  675721 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/ingress-addon-legacy-407320/proxy-client.crt with IP's: []
	I0918 19:06:58.172145  675721 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/ingress-addon-legacy-407320/proxy-client.crt ...
	I0918 19:06:58.172185  675721 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/ingress-addon-legacy-407320/proxy-client.crt: {Name:mk9ed76491172e9df6b4a9b5fa25e8005fb420e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:06:58.172428  675721 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/ingress-addon-legacy-407320/proxy-client.key ...
	I0918 19:06:58.172442  675721 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/ingress-addon-legacy-407320/proxy-client.key: {Name:mkb8d6fb153bb40ea4bfe1f095be681757985f6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
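The profile now holds a client cert, an apiserver serving cert, and an aggregator (front-proxy) client cert, each signed by the CAs cached earlier. The SANs requested above (192.168.49.2, 10.96.0.1, 127.0.0.1, 10.0.0.1) can be read back from the serving cert; a sketch with the profile path abbreviated:

    openssl x509 -noout -text \
      -in .minikube/profiles/ingress-addon-legacy-407320/apiserver.crt \
      | grep -A1 'Subject Alternative Name'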
	I0918 19:06:58.172543  675721 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/ingress-addon-legacy-407320/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0918 19:06:58.172570  675721 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/ingress-addon-legacy-407320/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0918 19:06:58.172588  675721 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/ingress-addon-legacy-407320/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0918 19:06:58.172604  675721 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/ingress-addon-legacy-407320/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0918 19:06:58.172621  675721 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17263-642665/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0918 19:06:58.172645  675721 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17263-642665/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0918 19:06:58.172663  675721 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17263-642665/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0918 19:06:58.172689  675721 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17263-642665/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0918 19:06:58.172761  675721 certs.go:437] found cert: /home/jenkins/minikube-integration/17263-642665/.minikube/certs/home/jenkins/minikube-integration/17263-642665/.minikube/certs/648003.pem (1338 bytes)
	W0918 19:06:58.172809  675721 certs.go:433] ignoring /home/jenkins/minikube-integration/17263-642665/.minikube/certs/home/jenkins/minikube-integration/17263-642665/.minikube/certs/648003_empty.pem, impossibly tiny 0 bytes
	I0918 19:06:58.172823  675721 certs.go:437] found cert: /home/jenkins/minikube-integration/17263-642665/.minikube/certs/home/jenkins/minikube-integration/17263-642665/.minikube/certs/ca-key.pem (1675 bytes)
	I0918 19:06:58.172861  675721 certs.go:437] found cert: /home/jenkins/minikube-integration/17263-642665/.minikube/certs/home/jenkins/minikube-integration/17263-642665/.minikube/certs/ca.pem (1082 bytes)
	I0918 19:06:58.172897  675721 certs.go:437] found cert: /home/jenkins/minikube-integration/17263-642665/.minikube/certs/home/jenkins/minikube-integration/17263-642665/.minikube/certs/cert.pem (1123 bytes)
	I0918 19:06:58.172935  675721 certs.go:437] found cert: /home/jenkins/minikube-integration/17263-642665/.minikube/certs/home/jenkins/minikube-integration/17263-642665/.minikube/certs/key.pem (1675 bytes)
	I0918 19:06:58.173013  675721 certs.go:437] found cert: /home/jenkins/minikube-integration/17263-642665/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17263-642665/.minikube/files/etc/ssl/certs/6480032.pem (1708 bytes)
	I0918 19:06:58.173046  675721 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17263-642665/.minikube/certs/648003.pem -> /usr/share/ca-certificates/648003.pem
	I0918 19:06:58.173068  675721 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17263-642665/.minikube/files/etc/ssl/certs/6480032.pem -> /usr/share/ca-certificates/6480032.pem
	I0918 19:06:58.173083  675721 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17263-642665/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0918 19:06:58.173753  675721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/ingress-addon-legacy-407320/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0918 19:06:58.205503  675721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/ingress-addon-legacy-407320/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0918 19:06:58.237851  675721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/ingress-addon-legacy-407320/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0918 19:06:58.267310  675721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/ingress-addon-legacy-407320/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0918 19:06:58.299396  675721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17263-642665/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0918 19:06:58.329982  675721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17263-642665/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0918 19:06:58.359892  675721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17263-642665/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0918 19:06:58.389758  675721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17263-642665/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0918 19:06:58.419086  675721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17263-642665/.minikube/certs/648003.pem --> /usr/share/ca-certificates/648003.pem (1338 bytes)
	I0918 19:06:58.453032  675721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17263-642665/.minikube/files/etc/ssl/certs/6480032.pem --> /usr/share/ca-certificates/6480032.pem (1708 bytes)
	I0918 19:06:58.482567  675721 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17263-642665/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0918 19:06:58.513621  675721 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0918 19:06:58.537003  675721 ssh_runner.go:195] Run: openssl version
	I0918 19:06:58.544385  675721 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6480032.pem && ln -fs /usr/share/ca-certificates/6480032.pem /etc/ssl/certs/6480032.pem"
	I0918 19:06:58.557049  675721 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6480032.pem
	I0918 19:06:58.561907  675721 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 18 19:02 /usr/share/ca-certificates/6480032.pem
	I0918 19:06:58.561977  675721 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6480032.pem
	I0918 19:06:58.570817  675721 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6480032.pem /etc/ssl/certs/3ec20f2e.0"
	I0918 19:06:58.582702  675721 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0918 19:06:58.594903  675721 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0918 19:06:58.599763  675721 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 18 18:55 /usr/share/ca-certificates/minikubeCA.pem
	I0918 19:06:58.599854  675721 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0918 19:06:58.608367  675721 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0918 19:06:58.620148  675721 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/648003.pem && ln -fs /usr/share/ca-certificates/648003.pem /etc/ssl/certs/648003.pem"
	I0918 19:06:58.631918  675721 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/648003.pem
	I0918 19:06:58.636704  675721 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 18 19:02 /usr/share/ca-certificates/648003.pem
	I0918 19:06:58.636813  675721 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/648003.pem
	I0918 19:06:58.645554  675721 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/648003.pem /etc/ssl/certs/51391683.0"
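Each openssl/ln pair above is the standard OpenSSL trust-store step: the certificate is linked into /etc/ssl/certs under its own name, then again under the subject-hash name <hash>.0 that OpenSSL's default verify path looks up. A minimal sketch of one round, using the minikubeCA.pem path from the log:

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"   # b5213941.0 above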
	I0918 19:06:58.657238  675721 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0918 19:06:58.661944  675721 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0918 19:06:58.662022  675721 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-407320 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-407320 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0918 19:06:58.662108  675721 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0918 19:06:58.662176  675721 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0918 19:06:58.705780  675721 cri.go:89] found id: ""
	I0918 19:06:58.705853  675721 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0918 19:06:58.716771  675721 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0918 19:06:58.727749  675721 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0918 19:06:58.727859  675721 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 19:06:58.738494  675721 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 19:06:58.738540  675721 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0918 19:06:58.794609  675721 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0918 19:06:58.794905  675721 kubeadm.go:322] [preflight] Running pre-flight checks
	I0918 19:06:58.854514  675721 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0918 19:06:58.854582  675721 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1044-aws
	I0918 19:06:58.854621  675721 kubeadm.go:322] OS: Linux
	I0918 19:06:58.854675  675721 kubeadm.go:322] CGROUPS_CPU: enabled
	I0918 19:06:58.854733  675721 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0918 19:06:58.854780  675721 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0918 19:06:58.854839  675721 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0918 19:06:58.854888  675721 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0918 19:06:58.854941  675721 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0918 19:06:58.949108  675721 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0918 19:06:58.949261  675721 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0918 19:06:58.949411  675721 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0918 19:06:59.195529  675721 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0918 19:06:59.197264  675721 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0918 19:06:59.197368  675721 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0918 19:06:59.300258  675721 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0918 19:06:59.303460  675721 out.go:204]   - Generating certificates and keys ...
	I0918 19:06:59.303575  675721 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0918 19:06:59.303648  675721 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0918 19:07:00.161439  675721 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0918 19:07:00.433755  675721 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0918 19:07:00.868610  675721 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0918 19:07:01.132314  675721 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0918 19:07:01.892619  675721 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0918 19:07:01.893590  675721 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-407320 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0918 19:07:02.084180  675721 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0918 19:07:02.084654  675721 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-407320 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0918 19:07:02.762197  675721 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0918 19:07:02.976533  675721 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0918 19:07:03.137974  675721 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0918 19:07:03.138455  675721 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0918 19:07:03.998713  675721 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0918 19:07:04.554507  675721 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0918 19:07:05.253697  675721 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0918 19:07:05.729593  675721 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0918 19:07:05.730358  675721 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0918 19:07:05.732640  675721 out.go:204]   - Booting up control plane ...
	I0918 19:07:05.732772  675721 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0918 19:07:05.738436  675721 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0918 19:07:05.740710  675721 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0918 19:07:05.742469  675721 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0918 19:07:05.746275  675721 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0918 19:07:17.750391  675721 kubeadm.go:322] [apiclient] All control plane components are healthy after 12.003718 seconds
	I0918 19:07:17.750520  675721 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0918 19:07:17.764379  675721 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0918 19:07:18.283702  675721 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0918 19:07:18.283864  675721 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-407320 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0918 19:07:18.792606  675721 kubeadm.go:322] [bootstrap-token] Using token: nhnd83.stusc6gfohes4p12
	I0918 19:07:18.795061  675721 out.go:204]   - Configuring RBAC rules ...
	I0918 19:07:18.795179  675721 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0918 19:07:18.800327  675721 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0918 19:07:18.816779  675721 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0918 19:07:18.819434  675721 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0918 19:07:18.822999  675721 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0918 19:07:18.826588  675721 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0918 19:07:18.838900  675721 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0918 19:07:19.106964  675721 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0918 19:07:19.234110  675721 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0918 19:07:19.235659  675721 kubeadm.go:322] 
	I0918 19:07:19.235729  675721 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0918 19:07:19.235742  675721 kubeadm.go:322] 
	I0918 19:07:19.235846  675721 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0918 19:07:19.235856  675721 kubeadm.go:322] 
	I0918 19:07:19.235884  675721 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0918 19:07:19.235944  675721 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0918 19:07:19.235996  675721 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0918 19:07:19.236006  675721 kubeadm.go:322] 
	I0918 19:07:19.236055  675721 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0918 19:07:19.236131  675721 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0918 19:07:19.236199  675721 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0918 19:07:19.236212  675721 kubeadm.go:322] 
	I0918 19:07:19.236291  675721 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0918 19:07:19.236366  675721 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0918 19:07:19.236374  675721 kubeadm.go:322] 
	I0918 19:07:19.236452  675721 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token nhnd83.stusc6gfohes4p12 \
	I0918 19:07:19.236554  675721 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:1471e1bb7c66f1f1f8363746a1e5f2ae35a8554d6ad2342a0b3973b70608e7c8 \
	I0918 19:07:19.236579  675721 kubeadm.go:322]     --control-plane 
	I0918 19:07:19.236587  675721 kubeadm.go:322] 
	I0918 19:07:19.236666  675721 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0918 19:07:19.236674  675721 kubeadm.go:322] 
	I0918 19:07:19.236751  675721 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token nhnd83.stusc6gfohes4p12 \
	I0918 19:07:19.236854  675721 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:1471e1bb7c66f1f1f8363746a1e5f2ae35a8554d6ad2342a0b3973b70608e7c8 
	I0918 19:07:19.240159  675721 kubeadm.go:322] W0918 19:06:58.793761    1233 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0918 19:07:19.240416  675721 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1044-aws\n", err: exit status 1
	I0918 19:07:19.240539  675721 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0918 19:07:19.240660  675721 kubeadm.go:322] W0918 19:07:05.738300    1233 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0918 19:07:19.240776  675721 kubeadm.go:322] W0918 19:07:05.740606    1233 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
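The trailing warnings are expected under the docker driver: the "node" is itself a container, so kernel-level preflight checks (SystemVerification, Swap, kernel module configs) are not meaningful there and minikube downgrades them via --ignore-preflight-errors, as the Start line above shows. A condensed, re-runnable sketch of that invocation (flag list abbreviated here; the full set is in the log):

    sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" \
        kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
        --ignore-preflight-errors=Swap,NumCPU,SystemVerification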
	I0918 19:07:19.240796  675721 cni.go:84] Creating CNI manager for ""
	I0918 19:07:19.240805  675721 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0918 19:07:19.243571  675721 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0918 19:07:19.246592  675721 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0918 19:07:19.251870  675721 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I0918 19:07:19.251893  675721 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0918 19:07:19.274551  675721 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0918 19:07:19.713731  675721 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0918 19:07:19.713857  675721 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:07:19.713933  675721 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36 minikube.k8s.io/name=ingress-addon-legacy-407320 minikube.k8s.io/updated_at=2023_09_18T19_07_19_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:07:19.847075  675721 ops.go:34] apiserver oom_adj: -16
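The minikube-rbac binding created above is what lets later addon manifests run under kube-system's default service account with full privileges. Stripped of minikube's pinned binary and kubeconfig paths, it is just:

    kubectl create clusterrolebinding minikube-rbac \
        --clusterrole=cluster-admin \
        --serviceaccount=kube-system:default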
	I0918 19:07:19.852223  675721 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:07:19.946877  675721 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:07:20.549356  675721 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:07:21.048808  675721 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:07:21.548838  675721 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:07:22.049327  675721 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:07:22.548750  675721 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:07:23.049337  675721 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:07:23.549774  675721 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:07:24.049743  675721 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:07:24.549559  675721 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:07:25.048897  675721 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:07:25.548743  675721 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:07:26.049527  675721 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:07:26.549157  675721 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:07:27.048814  675721 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:07:27.548718  675721 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:07:28.049527  675721 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:07:28.549646  675721 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:07:29.049105  675721 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:07:29.549726  675721 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:07:30.048842  675721 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:07:30.548798  675721 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:07:31.049069  675721 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:07:31.549232  675721 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:07:32.049317  675721 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:07:32.549095  675721 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:07:33.048763  675721 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:07:33.549717  675721 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:07:34.048746  675721 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:07:34.549005  675721 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:07:34.704962  675721 kubeadm.go:1081] duration metric: took 14.991150147s to wait for elevateKubeSystemPrivileges.
	I0918 19:07:34.704996  675721 kubeadm.go:406] StartCluster complete in 36.042980167s
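The roughly half-second polling loop above is minikube waiting for the controller manager to mint the default service account before proceeding. A plain-kubectl equivalent of that wait, sketched with a hypothetical 30s cap:

    for _ in $(seq 60); do
        kubectl -n default get serviceaccount default >/dev/null 2>&1 && break
        sleep 0.5
    done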
	I0918 19:07:34.705013  675721 settings.go:142] acquiring lock: {Name:mk1cee0139b5f0ae29a168e7793f3f69abc95f11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:07:34.705076  675721 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17263-642665/kubeconfig
	I0918 19:07:34.705846  675721 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17263-642665/kubeconfig: {Name:mkbc55d6d811840d4d5667f8f39c79585e0314ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:07:34.706589  675721 kapi.go:59] client config for ingress-addon-legacy-407320: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17263-642665/.minikube/profiles/ingress-addon-legacy-407320/client.crt", KeyFile:"/home/jenkins/minikube-integration/17263-642665/.minikube/profiles/ingress-addon-legacy-407320/client.key", CAFile:"/home/jenkins/minikube-integration/17263-642665/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1697f50), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0918 19:07:34.708134  675721 config.go:182] Loaded profile config "ingress-addon-legacy-407320": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0918 19:07:34.708201  675721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0918 19:07:34.708291  675721 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0918 19:07:34.708359  675721 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-407320"
	I0918 19:07:34.708377  675721 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-407320"
	I0918 19:07:34.708413  675721 host.go:66] Checking if "ingress-addon-legacy-407320" exists ...
	I0918 19:07:34.708886  675721 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-407320 --format={{.State.Status}}
	I0918 19:07:34.709220  675721 cert_rotation.go:137] Starting client certificate rotation controller
	I0918 19:07:34.709758  675721 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-407320"
	I0918 19:07:34.709779  675721 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-407320"
	I0918 19:07:34.710072  675721 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-407320 --format={{.State.Status}}
	I0918 19:07:34.770057  675721 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-407320" context rescaled to 1 replicas
	I0918 19:07:34.770101  675721 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0918 19:07:34.775147  675721 out.go:177] * Verifying Kubernetes components...
	I0918 19:07:34.770966  675721 kapi.go:59] client config for ingress-addon-legacy-407320: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17263-642665/.minikube/profiles/ingress-addon-legacy-407320/client.crt", KeyFile:"/home/jenkins/minikube-integration/17263-642665/.minikube/profiles/ingress-addon-legacy-407320/client.key", CAFile:"/home/jenkins/minikube-integration/17263-642665/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1697f50), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0918 19:07:34.779921  675721 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 19:07:34.777833  675721 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 19:07:34.781047  675721 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-407320"
	I0918 19:07:34.782252  675721 host.go:66] Checking if "ingress-addon-legacy-407320" exists ...
	I0918 19:07:34.782406  675721 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 19:07:34.782440  675721 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0918 19:07:34.782527  675721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-407320
	I0918 19:07:34.782744  675721 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-407320 --format={{.State.Status}}
	I0918 19:07:34.819245  675721 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0918 19:07:34.819266  675721 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0918 19:07:34.819331  675721 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-407320
	I0918 19:07:34.849153  675721 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33430 SSHKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/ingress-addon-legacy-407320/id_rsa Username:docker}
	I0918 19:07:34.863331  675721 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33430 SSHKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/ingress-addon-legacy-407320/id_rsa Username:docker}
	I0918 19:07:34.980863  675721 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0918 19:07:34.981549  675721 kapi.go:59] client config for ingress-addon-legacy-407320: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17263-642665/.minikube/profiles/ingress-addon-legacy-407320/client.crt", KeyFile:"/home/jenkins/minikube-integration/17263-642665/.minikube/profiles/ingress-addon-legacy-407320/client.key", CAFile:"/home/jenkins/minikube-integration/17263-642665/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1697f50), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0918 19:07:34.981891  675721 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-407320" to be "Ready" ...
	I0918 19:07:35.068344  675721 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 19:07:35.102825  675721 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0918 19:07:35.477572  675721 start.go:917] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
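The CoreDNS rewrite a few lines up is a one-liner worth unpacking: it pulls the coredns ConfigMap, uses sed to insert a hosts{} block (mapping host.minikube.internal to the container gateway, 192.168.49.1) ahead of the forward plugin and a log directive after errors, then replaces the ConfigMap in place. The same pipeline without minikube's pinned binary paths:

    kubectl -n kube-system get configmap coredns -o yaml \
      | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' \
            -e '/^        errors *$/i \        log' \
      | kubectl -n kube-system replace -f -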
	I0918 19:07:35.696251  675721 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0918 19:07:35.698623  675721 addons.go:502] enable addons completed in 990.319795ms: enabled=[storage-provisioner default-storageclass]
	I0918 19:07:37.057016  675721 node_ready.go:58] node "ingress-addon-legacy-407320" has status "Ready":"False"
	I0918 19:07:39.552632  675721 node_ready.go:58] node "ingress-addon-legacy-407320" has status "Ready":"False"
	I0918 19:07:42.052573  675721 node_ready.go:58] node "ingress-addon-legacy-407320" has status "Ready":"False"
	I0918 19:07:44.552061  675721 node_ready.go:58] node "ingress-addon-legacy-407320" has status "Ready":"False"
	I0918 19:07:47.051795  675721 node_ready.go:58] node "ingress-addon-legacy-407320" has status "Ready":"False"
	I0918 19:07:49.052372  675721 node_ready.go:58] node "ingress-addon-legacy-407320" has status "Ready":"False"
	I0918 19:07:51.551880  675721 node_ready.go:58] node "ingress-addon-legacy-407320" has status "Ready":"False"
	I0918 19:07:53.052590  675721 node_ready.go:49] node "ingress-addon-legacy-407320" has status "Ready":"True"
	I0918 19:07:53.052621  675721 node_ready.go:38] duration metric: took 18.070709295s waiting for node "ingress-addon-legacy-407320" to be "Ready" ...
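The node_ready poll above (roughly every 2.5s, 18s total here) has a declarative equivalent; a sketch with a hypothetical timeout:

    kubectl wait --for=condition=Ready node/ingress-addon-legacy-407320 --timeout=6m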
	I0918 19:07:53.052632  675721 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 19:07:53.060556  675721 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-nnspx" in "kube-system" namespace to be "Ready" ...
	I0918 19:07:55.069793  675721 pod_ready.go:102] pod "coredns-66bff467f8-nnspx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-09-18 19:07:35 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0918 19:07:57.069957  675721 pod_ready.go:102] pod "coredns-66bff467f8-nnspx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-09-18 19:07:35 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0918 19:07:59.072385  675721 pod_ready.go:102] pod "coredns-66bff467f8-nnspx" in "kube-system" namespace has status "Ready":"False"
	I0918 19:08:01.571188  675721 pod_ready.go:102] pod "coredns-66bff467f8-nnspx" in "kube-system" namespace has status "Ready":"False"
	I0918 19:08:03.571448  675721 pod_ready.go:102] pod "coredns-66bff467f8-nnspx" in "kube-system" namespace has status "Ready":"False"
	I0918 19:08:04.572470  675721 pod_ready.go:92] pod "coredns-66bff467f8-nnspx" in "kube-system" namespace has status "Ready":"True"
	I0918 19:08:04.572499  675721 pod_ready.go:81] duration metric: took 11.511907111s waiting for pod "coredns-66bff467f8-nnspx" in "kube-system" namespace to be "Ready" ...
	I0918 19:08:04.572512  675721 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-407320" in "kube-system" namespace to be "Ready" ...
	I0918 19:08:04.577682  675721 pod_ready.go:92] pod "etcd-ingress-addon-legacy-407320" in "kube-system" namespace has status "Ready":"True"
	I0918 19:08:04.577710  675721 pod_ready.go:81] duration metric: took 5.190511ms waiting for pod "etcd-ingress-addon-legacy-407320" in "kube-system" namespace to be "Ready" ...
	I0918 19:08:04.577725  675721 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-407320" in "kube-system" namespace to be "Ready" ...
	I0918 19:08:04.582849  675721 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-407320" in "kube-system" namespace has status "Ready":"True"
	I0918 19:08:04.582876  675721 pod_ready.go:81] duration metric: took 5.14233ms waiting for pod "kube-apiserver-ingress-addon-legacy-407320" in "kube-system" namespace to be "Ready" ...
	I0918 19:08:04.582887  675721 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-407320" in "kube-system" namespace to be "Ready" ...
	I0918 19:08:04.587940  675721 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-407320" in "kube-system" namespace has status "Ready":"True"
	I0918 19:08:04.587965  675721 pod_ready.go:81] duration metric: took 5.070421ms waiting for pod "kube-controller-manager-ingress-addon-legacy-407320" in "kube-system" namespace to be "Ready" ...
	I0918 19:08:04.587979  675721 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zqrwk" in "kube-system" namespace to be "Ready" ...
	I0918 19:08:04.593153  675721 pod_ready.go:92] pod "kube-proxy-zqrwk" in "kube-system" namespace has status "Ready":"True"
	I0918 19:08:04.593180  675721 pod_ready.go:81] duration metric: took 5.192784ms waiting for pod "kube-proxy-zqrwk" in "kube-system" namespace to be "Ready" ...
	I0918 19:08:04.593198  675721 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-407320" in "kube-system" namespace to be "Ready" ...
	I0918 19:08:04.767231  675721 request.go:629] Waited for 173.969771ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-407320
	I0918 19:08:04.967624  675721 request.go:629] Waited for 197.376262ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-407320
	I0918 19:08:04.970352  675721 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-407320" in "kube-system" namespace has status "Ready":"True"
	I0918 19:08:04.970376  675721 pod_ready.go:81] duration metric: took 377.169877ms waiting for pod "kube-scheduler-ingress-addon-legacy-407320" in "kube-system" namespace to be "Ready" ...
	I0918 19:08:04.970389  675721 pod_ready.go:38] duration metric: took 11.917742233s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 19:08:04.970433  675721 api_server.go:52] waiting for apiserver process to appear ...
	I0918 19:08:04.970514  675721 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 19:08:04.985185  675721 api_server.go:72] duration metric: took 30.215050775s to wait for apiserver process to appear ...
	I0918 19:08:04.985250  675721 api_server.go:88] waiting for apiserver healthz status ...
	I0918 19:08:04.985281  675721 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0918 19:08:04.998516  675721 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0918 19:08:04.999413  675721 api_server.go:141] control plane version: v1.18.20
	I0918 19:08:04.999446  675721 api_server.go:131] duration metric: took 14.17032ms to wait for apiserver health ...
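The healthz probe is a plain HTTPS GET; /healthz is exposed through the system:public-info-viewer role, so it is typically readable even without client credentials (-k skips cert verification when the minikube CA is not in the local trust store):

    curl -sk https://192.168.49.2:8443/healthz   # expected body: ok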
	I0918 19:08:04.999455  675721 system_pods.go:43] waiting for kube-system pods to appear ...
	I0918 19:08:05.167916  675721 request.go:629] Waited for 168.368041ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0918 19:08:05.174564  675721 system_pods.go:59] 8 kube-system pods found
	I0918 19:08:05.174616  675721 system_pods.go:61] "coredns-66bff467f8-nnspx" [3a5fa058-5258-414a-9181-4bac4e0eeb40] Running
	I0918 19:08:05.174624  675721 system_pods.go:61] "etcd-ingress-addon-legacy-407320" [22c5a8a7-0138-42c3-8e7d-2641ed9db425] Running
	I0918 19:08:05.174630  675721 system_pods.go:61] "kindnet-wft9r" [48ad2c77-70b7-4c43-9eb6-6285b73d03d3] Running
	I0918 19:08:05.174642  675721 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-407320" [8a45f4c9-20a2-4e5b-b246-32b04d666b5e] Running
	I0918 19:08:05.174673  675721 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-407320" [35191490-8a81-4fd7-86d2-f745a1448e04] Running
	I0918 19:08:05.174693  675721 system_pods.go:61] "kube-proxy-zqrwk" [53126a38-f5d2-493c-91e1-bdeddca14a7d] Running
	I0918 19:08:05.174700  675721 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-407320" [2fcb738e-0d80-4a6d-ba1b-9211a37bff70] Running
	I0918 19:08:05.174706  675721 system_pods.go:61] "storage-provisioner" [a17662a2-bd67-4184-88fd-928e99f90efe] Running
	I0918 19:08:05.174712  675721 system_pods.go:74] duration metric: took 175.250779ms to wait for pod list to return data ...
	I0918 19:08:05.174725  675721 default_sa.go:34] waiting for default service account to be created ...
	I0918 19:08:05.368149  675721 request.go:629] Waited for 193.340531ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0918 19:08:05.370804  675721 default_sa.go:45] found service account: "default"
	I0918 19:08:05.370833  675721 default_sa.go:55] duration metric: took 196.101482ms for default service account to be created ...
	I0918 19:08:05.370844  675721 system_pods.go:116] waiting for k8s-apps to be running ...
	I0918 19:08:05.568185  675721 request.go:629] Waited for 197.276537ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0918 19:08:05.573875  675721 system_pods.go:86] 8 kube-system pods found
	I0918 19:08:05.573908  675721 system_pods.go:89] "coredns-66bff467f8-nnspx" [3a5fa058-5258-414a-9181-4bac4e0eeb40] Running
	I0918 19:08:05.573917  675721 system_pods.go:89] "etcd-ingress-addon-legacy-407320" [22c5a8a7-0138-42c3-8e7d-2641ed9db425] Running
	I0918 19:08:05.573923  675721 system_pods.go:89] "kindnet-wft9r" [48ad2c77-70b7-4c43-9eb6-6285b73d03d3] Running
	I0918 19:08:05.573928  675721 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-407320" [8a45f4c9-20a2-4e5b-b246-32b04d666b5e] Running
	I0918 19:08:05.573934  675721 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-407320" [35191490-8a81-4fd7-86d2-f745a1448e04] Running
	I0918 19:08:05.573939  675721 system_pods.go:89] "kube-proxy-zqrwk" [53126a38-f5d2-493c-91e1-bdeddca14a7d] Running
	I0918 19:08:05.573943  675721 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-407320" [2fcb738e-0d80-4a6d-ba1b-9211a37bff70] Running
	I0918 19:08:05.573948  675721 system_pods.go:89] "storage-provisioner" [a17662a2-bd67-4184-88fd-928e99f90efe] Running
	I0918 19:08:05.573954  675721 system_pods.go:126] duration metric: took 203.105935ms to wait for k8s-apps to be running ...
	I0918 19:08:05.573967  675721 system_svc.go:44] waiting for kubelet service to be running ....
	I0918 19:08:05.574034  675721 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 19:08:05.588050  675721 system_svc.go:56] duration metric: took 14.070487ms WaitForService to wait for kubelet.
	I0918 19:08:05.588088  675721 kubeadm.go:581] duration metric: took 30.817961432s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0918 19:08:05.588113  675721 node_conditions.go:102] verifying NodePressure condition ...
	I0918 19:08:05.767569  675721 request.go:629] Waited for 179.381969ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0918 19:08:05.770803  675721 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0918 19:08:05.770838  675721 node_conditions.go:123] node cpu capacity is 2
	I0918 19:08:05.770851  675721 node_conditions.go:105] duration metric: took 182.731601ms to run NodePressure ...
	I0918 19:08:05.770864  675721 start.go:228] waiting for startup goroutines ...
	I0918 19:08:05.770871  675721 start.go:233] waiting for cluster config update ...
	I0918 19:08:05.770881  675721 start.go:242] writing updated cluster config ...
	I0918 19:08:05.771179  675721 ssh_runner.go:195] Run: rm -f paused
	I0918 19:08:05.836712  675721 start.go:600] kubectl: 1.28.2, cluster: 1.18.20 (minor skew: 10)
	I0918 19:08:05.839143  675721 out.go:177] 
	W0918 19:08:05.841456  675721 out.go:239] ! /usr/local/bin/kubectl is version 1.28.2, which may have incompatibilities with Kubernetes 1.18.20.
	I0918 19:08:05.843822  675721 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0918 19:08:05.846001  675721 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-407320" cluster and "default" namespace by default
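The skew warning above is worth heeding: kubectl is only supported within one minor version of the API server, and 1.28 against 1.18 is ten minors apart. The suggested escape hatch has minikube fetch a matching kubectl and proxy the call, e.g.:

    minikube -p ingress-addon-legacy-407320 kubectl -- get pods -A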
	
	* 
	* ==> CRI-O <==
	* Sep 18 19:11:11 ingress-addon-legacy-407320 crio[898]: time="2023-09-18 19:11:11.633990160Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=44b9fe47-c76b-4c0e-a24e-81e2fcee766c name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 18 19:11:11 ingress-addon-legacy-407320 crio[898]: time="2023-09-18 19:11:11.634155182Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a39a074194753e46f21cfbf0b4253444939f276ed100d23d5fc568ada19a9ebb,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb],Size_:28999826,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=44b9fe47-c76b-4c0e-a24e-81e2fcee766c name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 18 19:11:11 ingress-addon-legacy-407320 crio[898]: time="2023-09-18 19:11:11.634928131Z" level=info msg="Creating container: default/hello-world-app-5f5d8b66bb-xwblq/hello-world-app" id=e5dd034c-a499-43d6-9134-10a2e697dcdf name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Sep 18 19:11:11 ingress-addon-legacy-407320 crio[898]: time="2023-09-18 19:11:11.635023811Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 18 19:11:11 ingress-addon-legacy-407320 crio[898]: time="2023-09-18 19:11:11.728613982Z" level=info msg="Created container 55b10022d5f878c509f6292309923b13ee0c3921411c0ce09ea4982a2a10e38a: default/hello-world-app-5f5d8b66bb-xwblq/hello-world-app" id=e5dd034c-a499-43d6-9134-10a2e697dcdf name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Sep 18 19:11:11 ingress-addon-legacy-407320 crio[898]: time="2023-09-18 19:11:11.729144969Z" level=info msg="Starting container: 55b10022d5f878c509f6292309923b13ee0c3921411c0ce09ea4982a2a10e38a" id=91085717-55f8-467f-abb9-a7a1806fd92e name=/runtime.v1alpha2.RuntimeService/StartContainer
	Sep 18 19:11:11 ingress-addon-legacy-407320 conmon[3693]: conmon 55b10022d5f878c509f6 <ninfo>: container 3704 exited with status 1
	Sep 18 19:11:11 ingress-addon-legacy-407320 crio[898]: time="2023-09-18 19:11:11.750981406Z" level=info msg="Started container" PID=3704 containerID=55b10022d5f878c509f6292309923b13ee0c3921411c0ce09ea4982a2a10e38a description=default/hello-world-app-5f5d8b66bb-xwblq/hello-world-app id=91085717-55f8-467f-abb9-a7a1806fd92e name=/runtime.v1alpha2.RuntimeService/StartContainer sandboxID=bba1f8cd574feeb5403cc4653187dc9f37a634d6c064dedb22b899f5e1ebb73b
	Sep 18 19:11:12 ingress-addon-legacy-407320 crio[898]: time="2023-09-18 19:11:12.294512052Z" level=warning msg="Stopping container c8e0db7d0939321e40f33db3f6cb5cd8f714f0f1a44f4ebdfa1d127ee8fc6ccf with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=5a9d8cd6-38bc-425d-b97a-ce4146ce20d7 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Sep 18 19:11:12 ingress-addon-legacy-407320 crio[898]: time="2023-09-18 19:11:12.356185296Z" level=info msg="Removing container: 51c015836dc9793d931678bef78c2069853e02221ae55abb8492a53e642a6430" id=80f7c6d3-6b22-4ae7-b9ae-eff206ed7805 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Sep 18 19:11:12 ingress-addon-legacy-407320 conmon[2715]: conmon c8e0db7d0939321e40f3 <ninfo>: container 2728 exited with status 137
	Sep 18 19:11:12 ingress-addon-legacy-407320 crio[898]: time="2023-09-18 19:11:12.386719920Z" level=info msg="Removed container 51c015836dc9793d931678bef78c2069853e02221ae55abb8492a53e642a6430: default/hello-world-app-5f5d8b66bb-xwblq/hello-world-app" id=80f7c6d3-6b22-4ae7-b9ae-eff206ed7805 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
	Sep 18 19:11:12 ingress-addon-legacy-407320 crio[898]: time="2023-09-18 19:11:12.493269831Z" level=info msg="Stopped container c8e0db7d0939321e40f33db3f6cb5cd8f714f0f1a44f4ebdfa1d127ee8fc6ccf: ingress-nginx/ingress-nginx-controller-7fcf777cb7-nphtg/controller" id=7ca9dee7-a2c1-43d8-9f08-59b9030a3b5b name=/runtime.v1alpha2.RuntimeService/StopContainer
	Sep 18 19:11:12 ingress-addon-legacy-407320 crio[898]: time="2023-09-18 19:11:12.493868486Z" level=info msg="Stopping pod sandbox: c676bc9da468b7e526f09deaeda70bcd5f32aa63c53a12d2b74bd668c5b42e35" id=33e68890-96fd-4be2-9fef-5a75ed5f4559 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Sep 18 19:11:12 ingress-addon-legacy-407320 crio[898]: time="2023-09-18 19:11:12.497508024Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-NTNWLHAZVLP6CIZD - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-NWX62PIR4IYRQIP2 - [0:0]\n-X KUBE-HP-NTNWLHAZVLP6CIZD\n-X KUBE-HP-NWX62PIR4IYRQIP2\nCOMMIT\n"
	Sep 18 19:11:12 ingress-addon-legacy-407320 crio[898]: time="2023-09-18 19:11:12.498246782Z" level=info msg="Stopped container c8e0db7d0939321e40f33db3f6cb5cd8f714f0f1a44f4ebdfa1d127ee8fc6ccf: ingress-nginx/ingress-nginx-controller-7fcf777cb7-nphtg/controller" id=5a9d8cd6-38bc-425d-b97a-ce4146ce20d7 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Sep 18 19:11:12 ingress-addon-legacy-407320 crio[898]: time="2023-09-18 19:11:12.498673514Z" level=info msg="Stopping pod sandbox: c676bc9da468b7e526f09deaeda70bcd5f32aa63c53a12d2b74bd668c5b42e35" id=4dd75d53-29b7-41d6-bb01-44f35ea0c615 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Sep 18 19:11:12 ingress-addon-legacy-407320 crio[898]: time="2023-09-18 19:11:12.499309577Z" level=info msg="Closing host port tcp:80"
	Sep 18 19:11:12 ingress-addon-legacy-407320 crio[898]: time="2023-09-18 19:11:12.499346492Z" level=info msg="Closing host port tcp:443"
	Sep 18 19:11:12 ingress-addon-legacy-407320 crio[898]: time="2023-09-18 19:11:12.500593944Z" level=info msg="Host port tcp:80 does not have an open socket"
	Sep 18 19:11:12 ingress-addon-legacy-407320 crio[898]: time="2023-09-18 19:11:12.500628906Z" level=info msg="Host port tcp:443 does not have an open socket"
	Sep 18 19:11:12 ingress-addon-legacy-407320 crio[898]: time="2023-09-18 19:11:12.500773226Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7fcf777cb7-nphtg Namespace:ingress-nginx ID:c676bc9da468b7e526f09deaeda70bcd5f32aa63c53a12d2b74bd668c5b42e35 UID:7a2ed2ae-755b-414f-bd4d-2a5ab3815845 NetNS:/var/run/netns/5a068a07-e499-465a-ad7a-bdebd03eb043 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 18 19:11:12 ingress-addon-legacy-407320 crio[898]: time="2023-09-18 19:11:12.500919466Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7fcf777cb7-nphtg from CNI network \"kindnet\" (type=ptp)"
	Sep 18 19:11:12 ingress-addon-legacy-407320 crio[898]: time="2023-09-18 19:11:12.525431665Z" level=info msg="Stopped pod sandbox: c676bc9da468b7e526f09deaeda70bcd5f32aa63c53a12d2b74bd668c5b42e35" id=33e68890-96fd-4be2-9fef-5a75ed5f4559 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Sep 18 19:11:12 ingress-addon-legacy-407320 crio[898]: time="2023-09-18 19:11:12.525555448Z" level=info msg="Stopped pod sandbox (already stopped): c676bc9da468b7e526f09deaeda70bcd5f32aa63c53a12d2b74bd668c5b42e35" id=4dd75d53-29b7-41d6-bb01-44f35ea0c615 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
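The CRI-O messages above record the test's endgame: the hello-world-app container exits with status 1 and is re-created (attempt 2 in the status table below), while the ingress controller outlives its 2s stop timeout and is killed (conmon reports 137, i.e. 128+SIGKILL). The same runtime state can be inspected on the node with crictl, the tool the log already uses; container ID taken from the table below:

    sudo crictl ps -a --name hello-world-app   # list all attempts, including exited ones
    sudo crictl logs 55b10022d5f87             # why the app container exited with status 1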
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	55b10022d5f87       a39a074194753e46f21cfbf0b4253444939f276ed100d23d5fc568ada19a9ebb                                                   6 seconds ago       Exited              hello-world-app           2                   bba1f8cd574fe       hello-world-app-5f5d8b66bb-xwblq
	8b03e93539fc7       docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70                    2 minutes ago       Running             nginx                     0                   9677e3981eb52       nginx
	c8e0db7d09393       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   3 minutes ago       Exited              controller                0                   c676bc9da468b       ingress-nginx-controller-7fcf777cb7-nphtg
	92db2013fd99d       docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7     3 minutes ago       Exited              patch                     0                   e8fc5f6072867       ingress-nginx-admission-patch-8spl7
	09cc54d276834       docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7     3 minutes ago       Exited              create                    0                   2de560cbbb62d       ingress-nginx-admission-create-j8m9w
	6562cdde5c6fe       6e17ba78cf3ebe1410fe828dc4ca57d3df37ad0b3c1a64161e5c27d57a24d184                                                   3 minutes ago       Running             coredns                   0                   0d106c2233fab       coredns-66bff467f8-nnspx
	1b9b9fc6cb16b       gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2    3 minutes ago       Running             storage-provisioner       0                   a68a278a7a1db       storage-provisioner
	15f7ae1dc3de8       docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052                 3 minutes ago       Running             kindnet-cni               0                   4c8a2ec0db3da       kindnet-wft9r
	ddd311418ae53       565297bc6f7d41fdb7a8ac7f9d75617ef4e6efdd1b1e41af6e060e19c44c28a8                                                   3 minutes ago       Running             kube-proxy                0                   0246deeed38cd       kube-proxy-zqrwk
	c7c957d65bce2       ab707b0a0ea339254cc6e3f2e7d618d4793d5129acb2288e9194769271404952                                                   4 minutes ago       Running             etcd                      0                   3bb1d95b78759       etcd-ingress-addon-legacy-407320
	f1a229dba34b6       095f37015706de6eedb4f57eb2f9a25a1e3bf4bec63d50ba73f8968ef4094fd1                                                   4 minutes ago       Running             kube-scheduler            0                   33fcebca54b55       kube-scheduler-ingress-addon-legacy-407320
	9031ec2b87b87       2694cf044d66591c37b12c60ce1f1cdba3d271af5ebda43a2e4d32ebbadd97d0                                                   4 minutes ago       Running             kube-apiserver            0                   957cf099afcc0       kube-apiserver-ingress-addon-legacy-407320
	47781235e71cc       68a4fac29a865f21217550dbd3570dc1adbc602cf05d6eeb6f060eec1359e1f1                                                   4 minutes ago       Running             kube-controller-manager   0                   71f77987d8c8c       kube-controller-manager-ingress-addon-legacy-407320
	
	* 
	* ==> coredns [6562cdde5c6fe3b6dac0f03d450897e9ebc3d99e214cbf60812178ade5cb9445] <==
	* [INFO] 10.244.0.5:56460 - 13888 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000048591s
	[INFO] 10.244.0.5:56460 - 62658 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.006109571s
	[INFO] 10.244.0.5:51962 - 12898 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.007177191s
	[INFO] 10.244.0.5:51962 - 54351 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001768388s
	[INFO] 10.244.0.5:56460 - 34774 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001954505s
	[INFO] 10.244.0.5:56460 - 64819 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000112131s
	[INFO] 10.244.0.5:51962 - 31783 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000034027s
	[INFO] 10.244.0.5:45759 - 64670 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000100554s
	[INFO] 10.244.0.5:53587 - 11756 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000035914s
	[INFO] 10.244.0.5:53587 - 52171 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000036398s
	[INFO] 10.244.0.5:53587 - 54351 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000172456s
	[INFO] 10.244.0.5:53587 - 39215 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000140078s
	[INFO] 10.244.0.5:53587 - 41493 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000052792s
	[INFO] 10.244.0.5:45759 - 52239 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000035652s
	[INFO] 10.244.0.5:53587 - 36874 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00003415s
	[INFO] 10.244.0.5:45759 - 7203 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000337945s
	[INFO] 10.244.0.5:45759 - 26131 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000043963s
	[INFO] 10.244.0.5:53587 - 60608 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001137446s
	[INFO] 10.244.0.5:45759 - 4655 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000057239s
	[INFO] 10.244.0.5:45759 - 15957 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000171816s
	[INFO] 10.244.0.5:53587 - 27007 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000904559s
	[INFO] 10.244.0.5:53587 - 6238 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000235201s
	[INFO] 10.244.0.5:45759 - 29268 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.00094602s
	[INFO] 10.244.0.5:45759 - 45327 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000904551s
	[INFO] 10.244.0.5:45759 - 39749 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000069981s
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-407320
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-407320
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36
	                    minikube.k8s.io/name=ingress-addon-legacy-407320
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_18T19_07_19_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Sep 2023 19:07:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-407320
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Sep 2023 19:11:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Sep 2023 19:08:52 +0000   Mon, 18 Sep 2023 19:07:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Sep 2023 19:08:52 +0000   Mon, 18 Sep 2023 19:07:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Sep 2023 19:08:52 +0000   Mon, 18 Sep 2023 19:07:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Sep 2023 19:08:52 +0000   Mon, 18 Sep 2023 19:07:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-407320
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 64563637dd9746a4804631c9fdbc6dba
	  System UUID:                195e6cf5-1837-4881-aaf4-dcc4145d93b2
	  Boot ID:                    43cd75a3-7352-4de5-a11c-da52fa8117dc
	  Kernel Version:             5.15.0-1044-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-xwblq                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m44s
	  kube-system                 coredns-66bff467f8-nnspx                               100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     3m44s
	  kube-system                 etcd-ingress-addon-legacy-407320                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 kindnet-wft9r                                          100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      3m44s
	  kube-system                 kube-apiserver-ingress-addon-legacy-407320             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-407320    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 kube-proxy-zqrwk                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 kube-scheduler-ingress-addon-legacy-407320             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             120Mi (1%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  4m10s (x5 over 4m10s)  kubelet     Node ingress-addon-legacy-407320 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m10s (x5 over 4m10s)  kubelet     Node ingress-addon-legacy-407320 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m10s (x4 over 4m10s)  kubelet     Node ingress-addon-legacy-407320 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m56s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m56s                  kubelet     Node ingress-addon-legacy-407320 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m56s                  kubelet     Node ingress-addon-legacy-407320 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m56s                  kubelet     Node ingress-addon-legacy-407320 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m43s                  kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                3m26s                  kubelet     Node ingress-addon-legacy-407320 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.001145] FS-Cache: O-key=[8] '7670ed0000000000'
	[  +0.000769] FS-Cache: N-cookie c=00000030 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000958] FS-Cache: N-cookie d=000000003f524057{9p.inode} n=00000000e6e16996
	[  +0.001039] FS-Cache: N-key=[8] '7670ed0000000000'
	[  +0.009586] FS-Cache: Duplicate cookie detected
	[  +0.000770] FS-Cache: O-cookie c=0000002a [p=00000027 fl=226 nc=0 na=1]
	[  +0.000960] FS-Cache: O-cookie d=000000003f524057{9p.inode} n=00000000a61dace4
	[  +0.001049] FS-Cache: O-key=[8] '7670ed0000000000'
	[  +0.000697] FS-Cache: N-cookie c=00000031 [p=00000027 fl=2 nc=0 na=1]
	[  +0.001086] FS-Cache: N-cookie d=000000003f524057{9p.inode} n=00000000520d2c99
	[  +0.001040] FS-Cache: N-key=[8] '7670ed0000000000'
	[  +1.832465] FS-Cache: Duplicate cookie detected
	[  +0.000739] FS-Cache: O-cookie c=00000028 [p=00000027 fl=226 nc=0 na=1]
	[  +0.001072] FS-Cache: O-cookie d=000000003f524057{9p.inode} n=00000000c48f847e
	[  +0.001054] FS-Cache: O-key=[8] '7570ed0000000000'
	[  +0.000714] FS-Cache: N-cookie c=00000033 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000940] FS-Cache: N-cookie d=000000003f524057{9p.inode} n=00000000d0411578
	[  +0.001048] FS-Cache: N-key=[8] '7570ed0000000000'
	[  +0.410305] FS-Cache: Duplicate cookie detected
	[  +0.000752] FS-Cache: O-cookie c=0000002d [p=00000027 fl=226 nc=0 na=1]
	[  +0.000955] FS-Cache: O-cookie d=000000003f524057{9p.inode} n=00000000cf4dd87c
	[  +0.001120] FS-Cache: O-key=[8] '7b70ed0000000000'
	[  +0.000698] FS-Cache: N-cookie c=00000034 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000920] FS-Cache: N-cookie d=000000003f524057{9p.inode} n=00000000ed85fc4a
	[  +0.001085] FS-Cache: N-key=[8] '7b70ed0000000000'
	
	* 
	* ==> etcd [c7c957d65bce20161f0c5e4af3066d5ff308d42b6c91329ffbd37cb63e654b7e] <==
	* raft2023/09/18 19:07:10 INFO: aec36adc501070cc became follower at term 0
	raft2023/09/18 19:07:10 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/09/18 19:07:10 INFO: aec36adc501070cc became follower at term 1
	raft2023/09/18 19:07:10 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-09-18 19:07:10.576300 W | auth: simple token is not cryptographically signed
	2023-09-18 19:07:10.579622 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-09-18 19:07:10.582743 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-09-18 19:07:10.583013 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-09-18 19:07:10.583291 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-09-18 19:07:10.583768 I | embed: listening for peers on 192.168.49.2:2380
	raft2023/09/18 19:07:10 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-09-18 19:07:10.584256 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	raft2023/09/18 19:07:11 INFO: aec36adc501070cc is starting a new election at term 1
	raft2023/09/18 19:07:11 INFO: aec36adc501070cc became candidate at term 2
	raft2023/09/18 19:07:11 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2023/09/18 19:07:11 INFO: aec36adc501070cc became leader at term 2
	raft2023/09/18 19:07:11 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2023-09-18 19:07:11.826357 I | etcdserver: published {Name:ingress-addon-legacy-407320 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2023-09-18 19:07:11.826416 I | embed: ready to serve client requests
	2023-09-18 19:07:11.826450 I | etcdserver: setting up the initial cluster version to 3.4
	2023-09-18 19:07:11.843903 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-09-18 19:07:11.851900 I | etcdserver/api: enabled capabilities for version 3.4
	2023-09-18 19:07:11.857907 I | embed: ready to serve client requests
	2023-09-18 19:07:11.919379 I | embed: serving client requests on 127.0.0.1:2379
	2023-09-18 19:07:11.924801 I | embed: serving client requests on 192.168.49.2:2379
	
	* 
	* ==> kernel <==
	*  19:11:18 up  2:53,  0 users,  load average: 0.34, 0.69, 1.38
	Linux ingress-addon-legacy-407320 5.15.0-1044-aws #49~20.04.1-Ubuntu SMP Mon Aug 21 17:10:24 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [15f7ae1dc3de863c3762e280b1fc6df0701121ca08aa38b04c55e74b2f7dd335] <==
	* I0918 19:09:18.500155       1 main.go:227] handling current node
	I0918 19:09:28.510761       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0918 19:09:28.510793       1 main.go:227] handling current node
	I0918 19:09:38.514873       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0918 19:09:38.514903       1 main.go:227] handling current node
	I0918 19:09:48.526198       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0918 19:09:48.526226       1 main.go:227] handling current node
	I0918 19:09:58.538047       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0918 19:09:58.538073       1 main.go:227] handling current node
	I0918 19:10:08.541219       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0918 19:10:08.541247       1 main.go:227] handling current node
	I0918 19:10:18.552554       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0918 19:10:18.552584       1 main.go:227] handling current node
	I0918 19:10:28.558213       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0918 19:10:28.558243       1 main.go:227] handling current node
	I0918 19:10:38.561789       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0918 19:10:38.561816       1 main.go:227] handling current node
	I0918 19:10:48.565055       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0918 19:10:48.565086       1 main.go:227] handling current node
	I0918 19:10:58.576643       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0918 19:10:58.576673       1 main.go:227] handling current node
	I0918 19:11:08.580215       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0918 19:11:08.580246       1 main.go:227] handling current node
	I0918 19:11:18.590993       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0918 19:11:18.591020       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [9031ec2b87b875a6c5980c25c64c5a66e4b1b8479b7380d26ba04a0a1b4b404e] <==
	* E0918 19:07:16.028691       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I0918 19:07:16.198584       1 cache.go:39] Caches are synced for autoregister controller
	I0918 19:07:16.201546       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0918 19:07:16.204940       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0918 19:07:16.211691       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0918 19:07:16.211857       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0918 19:07:16.995747       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0918 19:07:16.995790       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0918 19:07:17.002113       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0918 19:07:17.007268       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0918 19:07:17.007295       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0918 19:07:17.414714       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0918 19:07:17.458591       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0918 19:07:17.526142       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0918 19:07:17.527312       1 controller.go:609] quota admission added evaluator for: endpoints
	I0918 19:07:17.531211       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0918 19:07:18.438102       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0918 19:07:19.089699       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0918 19:07:19.216429       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0918 19:07:22.539956       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0918 19:07:34.897533       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0918 19:07:34.910633       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0918 19:08:06.739563       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0918 19:08:33.957001       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	E0918 19:11:10.298955       1 authentication.go:53] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
	
	* 
	* ==> kube-controller-manager [47781235e71cc8d51895b6d974bf2b8cef7c9eeda4aefeddb4fd5f506b569b43] <==
	* I0918 19:07:34.928255       1 shared_informer.go:230] Caches are synced for ReplicaSet 
	I0918 19:07:34.929600       1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ingress-addon-legacy-407320", UID:"94c642c6-cc30-4c30-9dcc-48e2e2641cb8", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node ingress-addon-legacy-407320 event: Registered Node ingress-addon-legacy-407320 in Controller
	I0918 19:07:34.956656       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"0049621d-b458-4a3c-b257-a4d2351686a1", APIVersion:"apps/v1", ResourceVersion:"329", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-66bff467f8 to 1
	I0918 19:07:34.982161       1 shared_informer.go:230] Caches are synced for resource quota 
	I0918 19:07:34.986430       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"82153fa0-ae6c-4dbc-8b01-0b71973ca687", APIVersion:"apps/v1", ResourceVersion:"335", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-nnspx
	I0918 19:07:35.014013       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"f79e5e79-d663-4616-b1e1-880d540985e5", APIVersion:"apps/v1", ResourceVersion:"212", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-wft9r
	I0918 19:07:35.026659       1 shared_informer.go:230] Caches are synced for disruption 
	I0918 19:07:35.026689       1 disruption.go:339] Sending events to api server.
	I0918 19:07:35.033304       1 shared_informer.go:230] Caches are synced for resource quota 
	I0918 19:07:35.065595       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"f88a49f9-b0d4-4103-b209-be6eac762b28", APIVersion:"apps/v1", ResourceVersion:"205", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-zqrwk
	I0918 19:07:35.185496       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0918 19:07:35.185951       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0918 19:07:35.186014       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	E0918 19:07:35.440471       1 daemon_controller.go:321] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kindnet", UID:"f79e5e79-d663-4616-b1e1-880d540985e5", ResourceVersion:"212", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63830660839, loc:(*time.Location)(0x6307ca0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\
"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"kindnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"docker.io/kindest/kindnetd:v20230809-80a64d96\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",
\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x40018a6160), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x40018a6180)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x40018a61a0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*
int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x40018a61c0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI
:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x40018a61e0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVol
umeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x40018a6200), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDis
k:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), Sca
leIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"docker.io/kindest/kindnetd:v20230809-80a64d96", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x40018a6220)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x40018a6260)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.Re
sourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log"
, TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x400142ca00), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x40011d4bf8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x40000f0f50), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.P
odDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x400160e7d0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x40011d4c40)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	I0918 19:07:54.927581       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0918 19:08:06.726030       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"fbce0a3e-f942-416a-82ea-eb5a8bb33057", APIVersion:"apps/v1", ResourceVersion:"466", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0918 19:08:06.748124       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"3e2beb95-6f44-4059-9939-bb493ce8934b", APIVersion:"apps/v1", ResourceVersion:"467", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-nphtg
	I0918 19:08:06.764377       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"c3108e51-3063-4212-98df-d22a60c07d3b", APIVersion:"batch/v1", ResourceVersion:"469", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-j8m9w
	I0918 19:08:06.788191       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"b129f7a8-28e4-473d-a8ad-f658c1a2918d", APIVersion:"batch/v1", ResourceVersion:"477", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-8spl7
	I0918 19:08:10.005416       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"c3108e51-3063-4212-98df-d22a60c07d3b", APIVersion:"batch/v1", ResourceVersion:"483", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0918 19:08:10.060886       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"b129f7a8-28e4-473d-a8ad-f658c1a2918d", APIVersion:"batch/v1", ResourceVersion:"491", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0918 19:10:51.960696       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"4008a6f0-4f5d-4850-b848-68d5607e3e1f", APIVersion:"apps/v1", ResourceVersion:"704", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0918 19:10:51.980941       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"ef595236-a362-44ed-af08-f6f430b99b86", APIVersion:"apps/v1", ResourceVersion:"705", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-xwblq
	E0918 19:11:14.923572       1 tokens_controller.go:261] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-g8558" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	
	* 
	* ==> kube-proxy [ddd311418ae532f86f51da75cae7d849e99f0f2d0cf631a204996f2b7e8e046a] <==
	* W0918 19:07:35.886400       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0918 19:07:35.899580       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I0918 19:07:35.900277       1 server_others.go:186] Using iptables Proxier.
	I0918 19:07:35.900682       1 server.go:583] Version: v1.18.20
	I0918 19:07:35.903174       1 config.go:315] Starting service config controller
	I0918 19:07:35.903224       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0918 19:07:35.905863       1 config.go:133] Starting endpoints config controller
	I0918 19:07:35.905884       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0918 19:07:36.010057       1 shared_informer.go:230] Caches are synced for endpoints config 
	I0918 19:07:36.010074       1 shared_informer.go:230] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [f1a229dba34b6b486836ca809ae338e96c4da71116f5515bb8439b1befd5de08] <==
	* W0918 19:07:16.177750       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0918 19:07:16.219484       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0918 19:07:16.219612       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0918 19:07:16.221978       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0918 19:07:16.222151       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0918 19:07:16.222163       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0918 19:07:16.222183       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0918 19:07:16.226997       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0918 19:07:16.227117       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0918 19:07:16.227195       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0918 19:07:16.227268       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0918 19:07:16.232023       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0918 19:07:16.232108       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0918 19:07:16.232026       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0918 19:07:16.232283       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0918 19:07:16.232316       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0918 19:07:16.232396       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0918 19:07:16.232529       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0918 19:07:16.234845       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0918 19:07:17.154742       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0918 19:07:17.160843       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0918 19:07:17.200071       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0918 19:07:17.215875       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0918 19:07:20.322346       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E0918 19:07:35.181130       1 factory.go:503] pod: kube-system/coredns-66bff467f8-nnspx is already present in the active queue
	
	* 
	* ==> kubelet <==
	* Sep 18 19:10:56 ingress-addon-legacy-407320 kubelet[1612]: I0918 19:10:56.329175    1612 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: cdfd0ecd86ab42ef901ba0301b691ab45d51a673bc8045e904ba0ff84c43ebd2
	Sep 18 19:10:56 ingress-addon-legacy-407320 kubelet[1612]: I0918 19:10:56.329396    1612 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 51c015836dc9793d931678bef78c2069853e02221ae55abb8492a53e642a6430
	Sep 18 19:10:56 ingress-addon-legacy-407320 kubelet[1612]: E0918 19:10:56.329639    1612 pod_workers.go:191] Error syncing pod 12c999eb-60ca-49f7-b147-ce52e1b48207 ("hello-world-app-5f5d8b66bb-xwblq_default(12c999eb-60ca-49f7-b147-ce52e1b48207)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-xwblq_default(12c999eb-60ca-49f7-b147-ce52e1b48207)"
	Sep 18 19:10:57 ingress-addon-legacy-407320 kubelet[1612]: I0918 19:10:57.331849    1612 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 51c015836dc9793d931678bef78c2069853e02221ae55abb8492a53e642a6430
	Sep 18 19:10:57 ingress-addon-legacy-407320 kubelet[1612]: E0918 19:10:57.332115    1612 pod_workers.go:191] Error syncing pod 12c999eb-60ca-49f7-b147-ce52e1b48207 ("hello-world-app-5f5d8b66bb-xwblq_default(12c999eb-60ca-49f7-b147-ce52e1b48207)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-xwblq_default(12c999eb-60ca-49f7-b147-ce52e1b48207)"
	Sep 18 19:10:59 ingress-addon-legacy-407320 kubelet[1612]: E0918 19:10:59.633275    1612 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Sep 18 19:10:59 ingress-addon-legacy-407320 kubelet[1612]: E0918 19:10:59.633317    1612 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Sep 18 19:10:59 ingress-addon-legacy-407320 kubelet[1612]: E0918 19:10:59.633362    1612 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Sep 18 19:10:59 ingress-addon-legacy-407320 kubelet[1612]: E0918 19:10:59.633394    1612 pod_workers.go:191] Error syncing pod 4db3360c-fe37-4be0-aee1-6b41e36d61b2 ("kube-ingress-dns-minikube_kube-system(4db3360c-fe37-4be0-aee1-6b41e36d61b2)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Sep 18 19:11:07 ingress-addon-legacy-407320 kubelet[1612]: I0918 19:11:07.885461    1612 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-pqh7h" (UniqueName: "kubernetes.io/secret/4db3360c-fe37-4be0-aee1-6b41e36d61b2-minikube-ingress-dns-token-pqh7h") pod "4db3360c-fe37-4be0-aee1-6b41e36d61b2" (UID: "4db3360c-fe37-4be0-aee1-6b41e36d61b2")
	Sep 18 19:11:07 ingress-addon-legacy-407320 kubelet[1612]: I0918 19:11:07.892656    1612 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4db3360c-fe37-4be0-aee1-6b41e36d61b2-minikube-ingress-dns-token-pqh7h" (OuterVolumeSpecName: "minikube-ingress-dns-token-pqh7h") pod "4db3360c-fe37-4be0-aee1-6b41e36d61b2" (UID: "4db3360c-fe37-4be0-aee1-6b41e36d61b2"). InnerVolumeSpecName "minikube-ingress-dns-token-pqh7h". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 18 19:11:07 ingress-addon-legacy-407320 kubelet[1612]: I0918 19:11:07.985879    1612 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-pqh7h" (UniqueName: "kubernetes.io/secret/4db3360c-fe37-4be0-aee1-6b41e36d61b2-minikube-ingress-dns-token-pqh7h") on node "ingress-addon-legacy-407320" DevicePath ""
	Sep 18 19:11:10 ingress-addon-legacy-407320 kubelet[1612]: E0918 19:11:10.281348    1612 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-nphtg.178613f91552e9d9", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-nphtg", UID:"7a2ed2ae-755b-414f-bd4d-2a5ab3815845", APIVersion:"v1", ResourceVersion:"471", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-407320"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc13a461390845dd9, ext:231229605954, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc13a461390845dd9, ext:231229605954, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-nphtg.178613f91552e9d9" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Sep 18 19:11:10 ingress-addon-legacy-407320 kubelet[1612]: E0918 19:11:10.297801    1612 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-nphtg.178613f91552e9d9", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-nphtg", UID:"7a2ed2ae-755b-414f-bd4d-2a5ab3815845", APIVersion:"v1", ResourceVersion:"471", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-407320"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc13a461390845dd9, ext:231229605954, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc13a4613910e23de, ext:231238635078, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-nphtg.178613f91552e9d9" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Sep 18 19:11:11 ingress-addon-legacy-407320 kubelet[1612]: I0918 19:11:11.632646    1612 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 51c015836dc9793d931678bef78c2069853e02221ae55abb8492a53e642a6430
	Sep 18 19:11:12 ingress-addon-legacy-407320 kubelet[1612]: I0918 19:11:12.354509    1612 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 51c015836dc9793d931678bef78c2069853e02221ae55abb8492a53e642a6430
	Sep 18 19:11:12 ingress-addon-legacy-407320 kubelet[1612]: I0918 19:11:12.354753    1612 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 55b10022d5f878c509f6292309923b13ee0c3921411c0ce09ea4982a2a10e38a
	Sep 18 19:11:12 ingress-addon-legacy-407320 kubelet[1612]: E0918 19:11:12.354987    1612 pod_workers.go:191] Error syncing pod 12c999eb-60ca-49f7-b147-ce52e1b48207 ("hello-world-app-5f5d8b66bb-xwblq_default(12c999eb-60ca-49f7-b147-ce52e1b48207)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-xwblq_default(12c999eb-60ca-49f7-b147-ce52e1b48207)"
	Sep 18 19:11:13 ingress-addon-legacy-407320 kubelet[1612]: W0918 19:11:13.357195    1612 pod_container_deletor.go:77] Container "c676bc9da468b7e526f09deaeda70bcd5f32aa63c53a12d2b74bd668c5b42e35" not found in pod's containers
	Sep 18 19:11:14 ingress-addon-legacy-407320 kubelet[1612]: I0918 19:11:14.400505    1612 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/7a2ed2ae-755b-414f-bd4d-2a5ab3815845-webhook-cert") pod "7a2ed2ae-755b-414f-bd4d-2a5ab3815845" (UID: "7a2ed2ae-755b-414f-bd4d-2a5ab3815845")
	Sep 18 19:11:14 ingress-addon-legacy-407320 kubelet[1612]: I0918 19:11:14.400563    1612 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-hmz45" (UniqueName: "kubernetes.io/secret/7a2ed2ae-755b-414f-bd4d-2a5ab3815845-ingress-nginx-token-hmz45") pod "7a2ed2ae-755b-414f-bd4d-2a5ab3815845" (UID: "7a2ed2ae-755b-414f-bd4d-2a5ab3815845")
	Sep 18 19:11:14 ingress-addon-legacy-407320 kubelet[1612]: I0918 19:11:14.406023    1612 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a2ed2ae-755b-414f-bd4d-2a5ab3815845-ingress-nginx-token-hmz45" (OuterVolumeSpecName: "ingress-nginx-token-hmz45") pod "7a2ed2ae-755b-414f-bd4d-2a5ab3815845" (UID: "7a2ed2ae-755b-414f-bd4d-2a5ab3815845"). InnerVolumeSpecName "ingress-nginx-token-hmz45". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 18 19:11:14 ingress-addon-legacy-407320 kubelet[1612]: I0918 19:11:14.407930    1612 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a2ed2ae-755b-414f-bd4d-2a5ab3815845-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "7a2ed2ae-755b-414f-bd4d-2a5ab3815845" (UID: "7a2ed2ae-755b-414f-bd4d-2a5ab3815845"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 18 19:11:14 ingress-addon-legacy-407320 kubelet[1612]: I0918 19:11:14.500885    1612 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/7a2ed2ae-755b-414f-bd4d-2a5ab3815845-webhook-cert") on node "ingress-addon-legacy-407320" DevicePath ""
	Sep 18 19:11:14 ingress-addon-legacy-407320 kubelet[1612]: I0918 19:11:14.500934    1612 reconciler.go:319] Volume detached for volume "ingress-nginx-token-hmz45" (UniqueName: "kubernetes.io/secret/7a2ed2ae-755b-414f-bd4d-2a5ab3815845-ingress-nginx-token-hmz45") on node "ingress-addon-legacy-407320" DevicePath ""
	
	* 
	* ==> storage-provisioner [1b9b9fc6cb16b0a2c9a7b4f4b4a98efe663481be439fc722c89281b991414020] <==
	* I0918 19:07:55.339537       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0918 19:07:55.352542       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0918 19:07:55.352641       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0918 19:07:55.359337       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0918 19:07:55.360134       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-407320_a816c776-e62f-4b92-ac2a-e3ff2b0ed926!
	I0918 19:07:55.360770       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"973f113c-49cf-4a58-b475-2e4c1c035dc2", APIVersion:"v1", ResourceVersion:"409", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-407320_a816c776-e62f-4b92-ac2a-e3ff2b0ed926 became leader
	I0918 19:07:55.460447       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-407320_a816c776-e62f-4b92-ac2a-e3ff2b0ed926!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-407320 -n ingress-addon-legacy-407320
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-407320 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (180.42s)
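Two separate symptoms are visible in the kubelet tail above: hello-world-app-5f5d8b66bb-xwblq is looping in a 20s CrashLoopBackOff, and the ingress controller pod is being killed while the ingress-nginx namespace terminates (the rejected "Killing" event is expected noise when a namespace is being deleted, not a separate fault). A minimal follow-up sketch, assuming the ingress-addon-legacy-407320 profile and the crashed pod still exist:

	# assumption: the pod has not been garbage-collected; --previous prints the last terminated container's output
	kubectl --context ingress-addon-legacy-407320 -n default logs hello-world-app-5f5d8b66bb-xwblq --previous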

TestMultiNode/serial/PingHostFrom2Pods (4.98s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-689235 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-689235 -- exec busybox-5bc68d56bd-2bktr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-689235 -- exec busybox-5bc68d56bd-2bktr -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-arm64 kubectl -p multinode-689235 -- exec busybox-5bc68d56bd-2bktr -- sh -c "ping -c 1 192.168.58.1": exit status 1 (248.987683ms)

-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-2bktr): exit status 1
multinode_test.go:560: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-689235 -- exec busybox-5bc68d56bd-rmmxk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-689235 -- exec busybox-5bc68d56bd-rmmxk -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-arm64 kubectl -p multinode-689235 -- exec busybox-5bc68d56bd-rmmxk -- sh -c "ping -c 1 192.168.58.1": exit status 1 (233.059567ms)

-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-rmmxk): exit status 1
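Both pods fail identically, and the error is a capability problem rather than a network one: busybox's ping opens a raw ICMP socket, so "ping: permission denied (are you root?)" means the process has neither root/CAP_NET_RAW nor a group ID inside net.ipv4.ping_group_range, the sysctl that gates unprivileged ICMP datagram sockets. A minimal check, assuming the busybox pods are still running:

	# assumptions: pod busybox-5bc68d56bd-2bktr still exists; /proc is mounted (it is in busybox images)
	out/minikube-linux-arm64 kubectl -p multinode-689235 -- exec busybox-5bc68d56bd-2bktr -- \
	  sh -c "grep CapEff /proc/self/status; cat /proc/sys/net/ipv4/ping_group_range"

A CapEff mask with bit 13 (CAP_NET_RAW) cleared, combined with the kernel default range of "1 0" (disabled), reproduces exactly this error.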
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-689235
helpers_test.go:235: (dbg) docker inspect multinode-689235:

-- stdout --
	[
	    {
	        "Id": "e0b155a28412be3d94e22f1ca1010ac124c38296f3bdf609ef8b0f402546fbe5",
	        "Created": "2023-09-18T19:17:49.335602824Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 712617,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-09-18T19:17:49.699393139Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:560a33002deec07a703a16e2b1dbf6aecde4c0d46aaefa1cb6df4c8c8a7774a7",
	        "ResolvConfPath": "/var/lib/docker/containers/e0b155a28412be3d94e22f1ca1010ac124c38296f3bdf609ef8b0f402546fbe5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e0b155a28412be3d94e22f1ca1010ac124c38296f3bdf609ef8b0f402546fbe5/hostname",
	        "HostsPath": "/var/lib/docker/containers/e0b155a28412be3d94e22f1ca1010ac124c38296f3bdf609ef8b0f402546fbe5/hosts",
	        "LogPath": "/var/lib/docker/containers/e0b155a28412be3d94e22f1ca1010ac124c38296f3bdf609ef8b0f402546fbe5/e0b155a28412be3d94e22f1ca1010ac124c38296f3bdf609ef8b0f402546fbe5-json.log",
	        "Name": "/multinode-689235",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-689235:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "multinode-689235",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/cfed12a38a01a6af4a8fd6714dbbd343610e28c4146a05f980428cabb7f223b2-init/diff:/var/lib/docker/overlay2/4e03e4714bce8b0ad83859c0e431c5abac0520d3520e787a29bac63ee8779cc4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cfed12a38a01a6af4a8fd6714dbbd343610e28c4146a05f980428cabb7f223b2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cfed12a38a01a6af4a8fd6714dbbd343610e28c4146a05f980428cabb7f223b2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cfed12a38a01a6af4a8fd6714dbbd343610e28c4146a05f980428cabb7f223b2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-689235",
	                "Source": "/var/lib/docker/volumes/multinode-689235/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-689235",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-689235",
	                "name.minikube.sigs.k8s.io": "multinode-689235",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5482d753f69a6553d1336a3560b75811d7a5871122b07db36ce61e523394c693",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33490"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33489"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33486"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33488"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33487"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/5482d753f69a",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-689235": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "e0b155a28412",
	                        "multinode-689235"
	                    ],
	                    "NetworkID": "fb63e8abd7f077eb151b587dcef63da6a9f326a83fa3bb37ce55f37d282f4257",
	                    "EndpointID": "c793c98a0ef9fe344f7e50ca473b3ed582cf8d66c6f86f5d6505bfb7e2623c22",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
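The inspect output ties the ping target to the topology: 192.168.58.1, the address both pods tried to reach, is the gateway of the user-defined multinode-689235 bridge network, i.e. the Docker host's side of the bridge. A one-liner to read it back out of the JSON above (assumes jq is installed on the host):

	docker inspect multinode-689235 | jq -r '.[0].NetworkSettings.Networks["multinode-689235"].Gateway'
	# expected output: 192.168.58.1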
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p multinode-689235 -n multinode-689235
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689235 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p multinode-689235 logs -n 25: (1.887874689s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -p mount-start-2-850276                           | mount-start-2-850276 | jenkins | v1.31.2 | 18 Sep 23 19:17 UTC | 18 Sep 23 19:17 UTC |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| ssh     | mount-start-2-850276 ssh -- ls                    | mount-start-2-850276 | jenkins | v1.31.2 | 18 Sep 23 19:17 UTC | 18 Sep 23 19:17 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-848504                           | mount-start-1-848504 | jenkins | v1.31.2 | 18 Sep 23 19:17 UTC | 18 Sep 23 19:17 UTC |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-850276 ssh -- ls                    | mount-start-2-850276 | jenkins | v1.31.2 | 18 Sep 23 19:17 UTC | 18 Sep 23 19:17 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-850276                           | mount-start-2-850276 | jenkins | v1.31.2 | 18 Sep 23 19:17 UTC | 18 Sep 23 19:17 UTC |
	| start   | -p mount-start-2-850276                           | mount-start-2-850276 | jenkins | v1.31.2 | 18 Sep 23 19:17 UTC | 18 Sep 23 19:17 UTC |
	| ssh     | mount-start-2-850276 ssh -- ls                    | mount-start-2-850276 | jenkins | v1.31.2 | 18 Sep 23 19:17 UTC | 18 Sep 23 19:17 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-850276                           | mount-start-2-850276 | jenkins | v1.31.2 | 18 Sep 23 19:17 UTC | 18 Sep 23 19:17 UTC |
	| delete  | -p mount-start-1-848504                           | mount-start-1-848504 | jenkins | v1.31.2 | 18 Sep 23 19:17 UTC | 18 Sep 23 19:17 UTC |
	| start   | -p multinode-689235                               | multinode-689235     | jenkins | v1.31.2 | 18 Sep 23 19:17 UTC | 18 Sep 23 19:19 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-689235 -- apply -f                   | multinode-689235     | jenkins | v1.31.2 | 18 Sep 23 19:19 UTC | 18 Sep 23 19:19 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-689235 -- rollout                    | multinode-689235     | jenkins | v1.31.2 | 18 Sep 23 19:19 UTC | 18 Sep 23 19:19 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-689235 -- get pods -o                | multinode-689235     | jenkins | v1.31.2 | 18 Sep 23 19:19 UTC | 18 Sep 23 19:19 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-689235 -- get pods -o                | multinode-689235     | jenkins | v1.31.2 | 18 Sep 23 19:19 UTC | 18 Sep 23 19:19 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-689235 -- exec                       | multinode-689235     | jenkins | v1.31.2 | 18 Sep 23 19:19 UTC | 18 Sep 23 19:19 UTC |
	|         | busybox-5bc68d56bd-2bktr --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-689235 -- exec                       | multinode-689235     | jenkins | v1.31.2 | 18 Sep 23 19:19 UTC | 18 Sep 23 19:19 UTC |
	|         | busybox-5bc68d56bd-rmmxk --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-689235 -- exec                       | multinode-689235     | jenkins | v1.31.2 | 18 Sep 23 19:19 UTC | 18 Sep 23 19:19 UTC |
	|         | busybox-5bc68d56bd-2bktr --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-689235 -- exec                       | multinode-689235     | jenkins | v1.31.2 | 18 Sep 23 19:19 UTC | 18 Sep 23 19:20 UTC |
	|         | busybox-5bc68d56bd-rmmxk --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-689235 -- exec                       | multinode-689235     | jenkins | v1.31.2 | 18 Sep 23 19:20 UTC | 18 Sep 23 19:20 UTC |
	|         | busybox-5bc68d56bd-2bktr -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-689235 -- exec                       | multinode-689235     | jenkins | v1.31.2 | 18 Sep 23 19:20 UTC | 18 Sep 23 19:20 UTC |
	|         | busybox-5bc68d56bd-rmmxk -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-689235 -- get pods -o                | multinode-689235     | jenkins | v1.31.2 | 18 Sep 23 19:20 UTC | 18 Sep 23 19:20 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-689235 -- exec                       | multinode-689235     | jenkins | v1.31.2 | 18 Sep 23 19:20 UTC | 18 Sep 23 19:20 UTC |
	|         | busybox-5bc68d56bd-2bktr                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-689235 -- exec                       | multinode-689235     | jenkins | v1.31.2 | 18 Sep 23 19:20 UTC |                     |
	|         | busybox-5bc68d56bd-2bktr -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-689235 -- exec                       | multinode-689235     | jenkins | v1.31.2 | 18 Sep 23 19:20 UTC | 18 Sep 23 19:20 UTC |
	|         | busybox-5bc68d56bd-rmmxk                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-689235 -- exec                       | multinode-689235     | jenkins | v1.31.2 | 18 Sep 23 19:20 UTC |                     |
	|         | busybox-5bc68d56bd-rmmxk -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/18 19:17:43
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.21.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0918 19:17:43.637941  712152 out.go:296] Setting OutFile to fd 1 ...
	I0918 19:17:43.638128  712152 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 19:17:43.638137  712152 out.go:309] Setting ErrFile to fd 2...
	I0918 19:17:43.638143  712152 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 19:17:43.638424  712152 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17263-642665/.minikube/bin
	I0918 19:17:43.638876  712152 out.go:303] Setting JSON to false
	I0918 19:17:43.639914  712152 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":10809,"bootTime":1695053855,"procs":241,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0918 19:17:43.639984  712152 start.go:138] virtualization:  
	I0918 19:17:43.644006  712152 out.go:177] * [multinode-689235] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0918 19:17:43.645868  712152 out.go:177]   - MINIKUBE_LOCATION=17263
	I0918 19:17:43.647736  712152 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 19:17:43.646079  712152 notify.go:220] Checking for updates...
	I0918 19:17:43.652070  712152 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17263-642665/kubeconfig
	I0918 19:17:43.654094  712152 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17263-642665/.minikube
	I0918 19:17:43.655889  712152 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0918 19:17:43.657697  712152 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 19:17:43.659799  712152 driver.go:373] Setting default libvirt URI to qemu:///system
	I0918 19:17:43.684350  712152 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0918 19:17:43.684482  712152 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0918 19:17:43.776989  712152 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:35 SystemTime:2023-09-18 19:17:43.767096546 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0918 19:17:43.777093  712152 docker.go:294] overlay module found
	I0918 19:17:43.779462  712152 out.go:177] * Using the docker driver based on user configuration
	I0918 19:17:43.781523  712152 start.go:298] selected driver: docker
	I0918 19:17:43.781545  712152 start.go:902] validating driver "docker" against <nil>
	I0918 19:17:43.781560  712152 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 19:17:43.782196  712152 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0918 19:17:43.859357  712152 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:35 SystemTime:2023-09-18 19:17:43.849836885 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0918 19:17:43.859517  712152 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0918 19:17:43.859741  712152 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0918 19:17:43.861743  712152 out.go:177] * Using Docker driver with root privileges
	I0918 19:17:43.864095  712152 cni.go:84] Creating CNI manager for ""
	I0918 19:17:43.864117  712152 cni.go:136] 0 nodes found, recommending kindnet
	I0918 19:17:43.864127  712152 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0918 19:17:43.864145  712152 start_flags.go:321] config:
	{Name:multinode-689235 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-689235 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0918 19:17:43.866563  712152 out.go:177] * Starting control plane node multinode-689235 in cluster multinode-689235
	I0918 19:17:43.868632  712152 cache.go:122] Beginning downloading kic base image for docker with crio
	I0918 19:17:43.870796  712152 out.go:177] * Pulling base image ...
	I0918 19:17:43.872906  712152 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I0918 19:17:43.872909  712152 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon
	I0918 19:17:43.872967  712152 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17263-642665/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4
	I0918 19:17:43.872977  712152 cache.go:57] Caching tarball of preloaded images
	I0918 19:17:43.873052  712152 preload.go:174] Found /home/jenkins/minikube-integration/17263-642665/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0918 19:17:43.873062  712152 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I0918 19:17:43.873415  712152 profile.go:148] Saving config to /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/multinode-689235/config.json ...
	I0918 19:17:43.873444  712152 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/multinode-689235/config.json: {Name:mk6428fa597f2c3d5531d0c8c86e830f4d301653 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:17:43.891699  712152 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon, skipping pull
	I0918 19:17:43.891722  712152 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 exists in daemon, skipping load
	I0918 19:17:43.891736  712152 cache.go:195] Successfully downloaded all kic artifacts
	I0918 19:17:43.891766  712152 start.go:365] acquiring machines lock for multinode-689235: {Name:mkede784c62c57f6a9ccf966bea77b41a29cd266 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 19:17:43.891906  712152 start.go:369] acquired machines lock for "multinode-689235" in 97.822µs
	I0918 19:17:43.891945  712152 start.go:93] Provisioning new machine with config: &{Name:multinode-689235 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-689235 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0918 19:17:43.892089  712152 start.go:125] createHost starting for "" (driver="docker")
	I0918 19:17:43.896163  712152 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0918 19:17:43.896436  712152 start.go:159] libmachine.API.Create for "multinode-689235" (driver="docker")
	I0918 19:17:43.896468  712152 client.go:168] LocalClient.Create starting
	I0918 19:17:43.896559  712152 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17263-642665/.minikube/certs/ca.pem
	I0918 19:17:43.896602  712152 main.go:141] libmachine: Decoding PEM data...
	I0918 19:17:43.896621  712152 main.go:141] libmachine: Parsing certificate...
	I0918 19:17:43.896679  712152 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17263-642665/.minikube/certs/cert.pem
	I0918 19:17:43.896702  712152 main.go:141] libmachine: Decoding PEM data...
	I0918 19:17:43.896713  712152 main.go:141] libmachine: Parsing certificate...
	I0918 19:17:43.897088  712152 cli_runner.go:164] Run: docker network inspect multinode-689235 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0918 19:17:43.914420  712152 cli_runner.go:211] docker network inspect multinode-689235 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0918 19:17:43.914507  712152 network_create.go:281] running [docker network inspect multinode-689235] to gather additional debugging logs...
	I0918 19:17:43.914570  712152 cli_runner.go:164] Run: docker network inspect multinode-689235
	W0918 19:17:43.931936  712152 cli_runner.go:211] docker network inspect multinode-689235 returned with exit code 1
	I0918 19:17:43.931978  712152 network_create.go:284] error running [docker network inspect multinode-689235]: docker network inspect multinode-689235: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-689235 not found
	I0918 19:17:43.931993  712152 network_create.go:286] output of [docker network inspect multinode-689235]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-689235 not found
	
	** /stderr **
	I0918 19:17:43.932054  712152 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0918 19:17:43.951304  712152 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-0d7b340fbd2d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:fc:f4:37:66} reservation:<nil>}
	I0918 19:17:43.951677  712152 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4000e523a0}
	I0918 19:17:43.951702  712152 network_create.go:123] attempt to create docker network multinode-689235 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0918 19:17:43.951765  712152 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-689235 multinode-689235
	I0918 19:17:44.038380  712152 network_create.go:107] docker network multinode-689235 192.168.58.0/24 created
	I0918 19:17:44.038413  712152 kic.go:117] calculated static IP "192.168.58.2" for the "multinode-689235" container
	I0918 19:17:44.038496  712152 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0918 19:17:44.057212  712152 cli_runner.go:164] Run: docker volume create multinode-689235 --label name.minikube.sigs.k8s.io=multinode-689235 --label created_by.minikube.sigs.k8s.io=true
	I0918 19:17:44.077417  712152 oci.go:103] Successfully created a docker volume multinode-689235
	I0918 19:17:44.077506  712152 cli_runner.go:164] Run: docker run --rm --name multinode-689235-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-689235 --entrypoint /usr/bin/test -v multinode-689235:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -d /var/lib
	I0918 19:17:44.684067  712152 oci.go:107] Successfully prepared a docker volume multinode-689235
	I0918 19:17:44.684108  712152 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I0918 19:17:44.684128  712152 kic.go:190] Starting extracting preloaded images to volume ...
	I0918 19:17:44.684215  712152 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17263-642665/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-689235:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -I lz4 -xf /preloaded.tar -C /extractDir
	I0918 19:17:49.249298  712152 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17263-642665/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-689235:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -I lz4 -xf /preloaded.tar -C /extractDir: (4.565040935s)
	I0918 19:17:49.249332  712152 kic.go:199] duration metric: took 4.565201 seconds to extract preloaded images to volume
	W0918 19:17:49.249490  712152 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0918 19:17:49.249600  712152 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0918 19:17:49.317003  712152 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-689235 --name multinode-689235 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-689235 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-689235 --network multinode-689235 --ip 192.168.58.2 --volume multinode-689235:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3
	I0918 19:17:49.708452  712152 cli_runner.go:164] Run: docker container inspect multinode-689235 --format={{.State.Running}}
	I0918 19:17:49.736101  712152 cli_runner.go:164] Run: docker container inspect multinode-689235 --format={{.State.Status}}
	I0918 19:17:49.760315  712152 cli_runner.go:164] Run: docker exec multinode-689235 stat /var/lib/dpkg/alternatives/iptables
	I0918 19:17:49.818602  712152 oci.go:144] the created container "multinode-689235" has a running status.
	I0918 19:17:49.818629  712152 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17263-642665/.minikube/machines/multinode-689235/id_rsa...
	I0918 19:17:50.036604  712152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17263-642665/.minikube/machines/multinode-689235/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0918 19:17:50.036727  712152 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17263-642665/.minikube/machines/multinode-689235/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0918 19:17:50.073136  712152 cli_runner.go:164] Run: docker container inspect multinode-689235 --format={{.State.Status}}
	I0918 19:17:50.124675  712152 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0918 19:17:50.124696  712152 kic_runner.go:114] Args: [docker exec --privileged multinode-689235 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0918 19:17:50.233719  712152 cli_runner.go:164] Run: docker container inspect multinode-689235 --format={{.State.Status}}
	I0918 19:17:50.271531  712152 machine.go:88] provisioning docker machine ...
	I0918 19:17:50.271562  712152 ubuntu.go:169] provisioning hostname "multinode-689235"
	I0918 19:17:50.271636  712152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-689235
	I0918 19:17:50.297281  712152 main.go:141] libmachine: Using SSH client type: native
	I0918 19:17:50.297731  712152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 33490 <nil> <nil>}
	I0918 19:17:50.297743  712152 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-689235 && echo "multinode-689235" | sudo tee /etc/hostname
	I0918 19:17:50.298418  712152 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0918 19:17:53.454979  712152 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-689235
	
	I0918 19:17:53.455059  712152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-689235
	I0918 19:17:53.476146  712152 main.go:141] libmachine: Using SSH client type: native
	I0918 19:17:53.476555  712152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 33490 <nil> <nil>}
	I0918 19:17:53.476577  712152 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-689235' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-689235/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-689235' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0918 19:17:53.612964  712152 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 19:17:53.613005  712152 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17263-642665/.minikube CaCertPath:/home/jenkins/minikube-integration/17263-642665/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17263-642665/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17263-642665/.minikube}
	I0918 19:17:53.613045  712152 ubuntu.go:177] setting up certificates
	I0918 19:17:53.613054  712152 provision.go:83] configureAuth start
	I0918 19:17:53.613112  712152 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-689235
	I0918 19:17:53.630852  712152 provision.go:138] copyHostCerts
	I0918 19:17:53.630895  712152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17263-642665/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17263-642665/.minikube/cert.pem
	I0918 19:17:53.630927  712152 exec_runner.go:144] found /home/jenkins/minikube-integration/17263-642665/.minikube/cert.pem, removing ...
	I0918 19:17:53.630934  712152 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17263-642665/.minikube/cert.pem
	I0918 19:17:53.631014  712152 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17263-642665/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17263-642665/.minikube/cert.pem (1123 bytes)
	I0918 19:17:53.631098  712152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17263-642665/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17263-642665/.minikube/key.pem
	I0918 19:17:53.631114  712152 exec_runner.go:144] found /home/jenkins/minikube-integration/17263-642665/.minikube/key.pem, removing ...
	I0918 19:17:53.631119  712152 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17263-642665/.minikube/key.pem
	I0918 19:17:53.631145  712152 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17263-642665/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17263-642665/.minikube/key.pem (1675 bytes)
	I0918 19:17:53.631200  712152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17263-642665/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17263-642665/.minikube/ca.pem
	I0918 19:17:53.631217  712152 exec_runner.go:144] found /home/jenkins/minikube-integration/17263-642665/.minikube/ca.pem, removing ...
	I0918 19:17:53.631221  712152 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17263-642665/.minikube/ca.pem
	I0918 19:17:53.631245  712152 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17263-642665/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17263-642665/.minikube/ca.pem (1082 bytes)
	I0918 19:17:53.631305  712152 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17263-642665/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17263-642665/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17263-642665/.minikube/certs/ca-key.pem org=jenkins.multinode-689235 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-689235]
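
configureAuth signs a per-machine server certificate against the minikube CA, with the SANs listed above (node IP, loopback, and the minikube and machine hostnames). minikube does this in Go via crypto/x509; a roughly equivalent manual sketch with openssl, reusing the file names and SANs from this log (the openssl invocation itself is an assumption, not minikube's code), would be:

	openssl req -new -newkey rsa:2048 -nodes -subj "/O=jenkins.multinode-689235" \
	  -keyout server-key.pem -out server.csr
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -days 365 -out server.pem \
	  -extfile <(printf 'subjectAltName=IP:192.168.58.2,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:multinode-689235')
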
	I0918 19:17:54.001855  712152 provision.go:172] copyRemoteCerts
	I0918 19:17:54.001945  712152 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0918 19:17:54.001998  712152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-689235
	I0918 19:17:54.024113  712152 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33490 SSHKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/multinode-689235/id_rsa Username:docker}
	I0918 19:17:54.131689  712152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17263-642665/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0918 19:17:54.131755  712152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17263-642665/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0918 19:17:54.162489  712152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17263-642665/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0918 19:17:54.162553  712152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17263-642665/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0918 19:17:54.192532  712152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17263-642665/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0918 19:17:54.192646  712152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17263-642665/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0918 19:17:54.222421  712152 provision.go:86] duration metric: configureAuth took 609.352168ms
	I0918 19:17:54.222489  712152 ubuntu.go:193] setting minikube options for container-runtime
	I0918 19:17:54.222728  712152 config.go:182] Loaded profile config "multinode-689235": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0918 19:17:54.222854  712152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-689235
	I0918 19:17:54.241773  712152 main.go:141] libmachine: Using SSH client type: native
	I0918 19:17:54.242210  712152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 33490 <nil> <nil>}
	I0918 19:17:54.242231  712152 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0918 19:17:54.496130  712152 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0918 19:17:54.496153  712152 machine.go:91] provisioned docker machine in 4.224602302s
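
The container-runtime step above just drops a one-line environment file and restarts CRI-O so the service CIDR is treated as an insecure registry. Reassembled from the output echoed back in the log, the file is:

	$ cat /etc/sysconfig/crio.minikube
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
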
	I0918 19:17:54.496172  712152 client.go:171] LocalClient.Create took 10.599698381s
	I0918 19:17:54.496184  712152 start.go:167] duration metric: libmachine.API.Create for "multinode-689235" took 10.599750181s
	I0918 19:17:54.496192  712152 start.go:300] post-start starting for "multinode-689235" (driver="docker")
	I0918 19:17:54.496202  712152 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0918 19:17:54.496274  712152 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0918 19:17:54.496323  712152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-689235
	I0918 19:17:54.514323  712152 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33490 SSHKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/multinode-689235/id_rsa Username:docker}
	I0918 19:17:54.615077  712152 ssh_runner.go:195] Run: cat /etc/os-release
	I0918 19:17:54.619283  712152 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I0918 19:17:54.619310  712152 command_runner.go:130] > NAME="Ubuntu"
	I0918 19:17:54.619318  712152 command_runner.go:130] > VERSION_ID="22.04"
	I0918 19:17:54.619324  712152 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I0918 19:17:54.619330  712152 command_runner.go:130] > VERSION_CODENAME=jammy
	I0918 19:17:54.619335  712152 command_runner.go:130] > ID=ubuntu
	I0918 19:17:54.619340  712152 command_runner.go:130] > ID_LIKE=debian
	I0918 19:17:54.619345  712152 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0918 19:17:54.619351  712152 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0918 19:17:54.619358  712152 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0918 19:17:54.619372  712152 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0918 19:17:54.619381  712152 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0918 19:17:54.619457  712152 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0918 19:17:54.619493  712152 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0918 19:17:54.619518  712152 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0918 19:17:54.619532  712152 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0918 19:17:54.619543  712152 filesync.go:126] Scanning /home/jenkins/minikube-integration/17263-642665/.minikube/addons for local assets ...
	I0918 19:17:54.619613  712152 filesync.go:126] Scanning /home/jenkins/minikube-integration/17263-642665/.minikube/files for local assets ...
	I0918 19:17:54.619705  712152 filesync.go:149] local asset: /home/jenkins/minikube-integration/17263-642665/.minikube/files/etc/ssl/certs/6480032.pem -> 6480032.pem in /etc/ssl/certs
	I0918 19:17:54.619716  712152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17263-642665/.minikube/files/etc/ssl/certs/6480032.pem -> /etc/ssl/certs/6480032.pem
	I0918 19:17:54.619841  712152 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0918 19:17:54.631092  712152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17263-642665/.minikube/files/etc/ssl/certs/6480032.pem --> /etc/ssl/certs/6480032.pem (1708 bytes)
	I0918 19:17:54.661117  712152 start.go:303] post-start completed in 164.908866ms
	I0918 19:17:54.661561  712152 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-689235
	I0918 19:17:54.679452  712152 profile.go:148] Saving config to /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/multinode-689235/config.json ...
	I0918 19:17:54.679731  712152 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0918 19:17:54.679826  712152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-689235
	I0918 19:17:54.697865  712152 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33490 SSHKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/multinode-689235/id_rsa Username:docker}
	I0918 19:17:54.794087  712152 command_runner.go:130] > 13%
	I0918 19:17:54.794171  712152 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0918 19:17:54.800342  712152 command_runner.go:130] > 170G
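
The two df probes read how full /var is and how many gigabytes remain; minikube uses these to warn about low disk space. The same commands, runnable standalone:

	df -h /var | awk 'NR==2{print $5}'   # percent used, "13%" on this run
	df -BG /var | awk 'NR==2{print $4}'  # space available, "170G" on this run
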
	I0918 19:17:54.800400  712152 start.go:128] duration metric: createHost completed in 10.908283361s
	I0918 19:17:54.800416  712152 start.go:83] releasing machines lock for "multinode-689235", held for 10.908492593s
	I0918 19:17:54.800532  712152 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-689235
	I0918 19:17:54.818995  712152 ssh_runner.go:195] Run: cat /version.json
	I0918 19:17:54.819056  712152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-689235
	I0918 19:17:54.819312  712152 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0918 19:17:54.819368  712152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-689235
	I0918 19:17:54.846885  712152 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33490 SSHKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/multinode-689235/id_rsa Username:docker}
	I0918 19:17:54.848398  712152 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33490 SSHKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/multinode-689235/id_rsa Username:docker}
	I0918 19:17:54.940268  712152 command_runner.go:130] > {"iso_version": "v1.31.0-1694625400-17243", "kicbase_version": "v0.0.40-1694798187-17250", "minikube_version": "v1.31.2", "commit": "c590c2ca0a7db48c4b84c041c2699711a39ab56a"}
	I0918 19:17:54.940596  712152 ssh_runner.go:195] Run: systemctl --version
	I0918 19:17:55.078487  712152 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0918 19:17:55.082282  712152 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.10)
	I0918 19:17:55.082374  712152 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I0918 19:17:55.082505  712152 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0918 19:17:55.237482  712152 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0918 19:17:55.243277  712152 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0918 19:17:55.243302  712152 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0918 19:17:55.243312  712152 command_runner.go:130] > Device: 36h/54d	Inode: 1304403     Links: 1
	I0918 19:17:55.243320  712152 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0918 19:17:55.243327  712152 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0918 19:17:55.243333  712152 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0918 19:17:55.243339  712152 command_runner.go:130] > Change: 2023-09-18 18:55:15.404659531 +0000
	I0918 19:17:55.243345  712152 command_runner.go:130] >  Birth: 2023-09-18 18:55:15.404659531 +0000
	I0918 19:17:55.243636  712152 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0918 19:17:55.269354  712152 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0918 19:17:55.269434  712152 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0918 19:17:55.309437  712152 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0918 19:17:55.309484  712152 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
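
Note that the default CNI configs are not deleted, only renamed with a .mk_disabled suffix so they can be put back later. The same pattern, with a hypothetical restore step (the restore is not part of this log):

	# disable bridge/podman configs, as logged above
	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;
	# hypothetical inverse: strip the suffix again
	for f in /etc/cni/net.d/*.mk_disabled; do sudo mv "$f" "${f%.mk_disabled}"; done
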
	I0918 19:17:55.309493  712152 start.go:469] detecting cgroup driver to use...
	I0918 19:17:55.309549  712152 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0918 19:17:55.309624  712152 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0918 19:17:55.329304  712152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0918 19:17:55.343578  712152 docker.go:196] disabling cri-docker service (if available) ...
	I0918 19:17:55.343662  712152 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0918 19:17:55.360745  712152 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0918 19:17:55.378015  712152 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0918 19:17:55.486017  712152 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0918 19:17:55.594568  712152 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0918 19:17:55.594594  712152 docker.go:212] disabling docker service ...
	I0918 19:17:55.594645  712152 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0918 19:17:55.617418  712152 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0918 19:17:55.631220  712152 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0918 19:17:55.725866  712152 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0918 19:17:55.725994  712152 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0918 19:17:55.739853  712152 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0918 19:17:55.840874  712152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
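
Because CRI-O is the selected runtime, the cri-docker and docker units are stopped, disabled, and masked so systemd socket activation cannot bring them back. Per service the pattern is (same systemctl subcommands as logged; undoing it would be unmask followed by enable):

	sudo systemctl stop -f docker.socket docker.service
	sudo systemctl disable docker.socket
	sudo systemctl mask docker.service   # symlinks the unit file to /dev/null
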
	I0918 19:17:55.855387  712152 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 19:17:55.874463  712152 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0918 19:17:55.875903  712152 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0918 19:17:55.875969  712152 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 19:17:55.888650  712152 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0918 19:17:55.888729  712152 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 19:17:55.901704  712152 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 19:17:55.913936  712152 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 19:17:55.926625  712152 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0918 19:17:55.938614  712152 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0918 19:17:55.948189  712152 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0918 19:17:55.949387  712152 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0918 19:17:55.959859  712152 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 19:17:56.049115  712152 ssh_runner.go:195] Run: sudo systemctl restart crio
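
Taken together, the crictl.yaml write and the three sed edits leave the node pointed at CRI-O with the kubeadm pause image and the cgroupfs driver. The relevant files after this step, reconstructed from the commands above (the real 02-crio.conf contains other keys as well):

	# /etc/crictl.yaml
	runtime-endpoint: unix:///var/run/crio/crio.sock

	# /etc/crio/crio.conf.d/02-crio.conf (edited keys)
	pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
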
	I0918 19:17:56.170932  712152 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0918 19:17:56.171034  712152 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0918 19:17:56.175890  712152 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0918 19:17:56.175915  712152 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0918 19:17:56.175923  712152 command_runner.go:130] > Device: 43h/67d	Inode: 190         Links: 1
	I0918 19:17:56.175931  712152 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0918 19:17:56.175967  712152 command_runner.go:130] > Access: 2023-09-18 19:17:56.154250191 +0000
	I0918 19:17:56.175984  712152 command_runner.go:130] > Modify: 2023-09-18 19:17:56.154250191 +0000
	I0918 19:17:56.175991  712152 command_runner.go:130] > Change: 2023-09-18 19:17:56.154250191 +0000
	I0918 19:17:56.176000  712152 command_runner.go:130] >  Birth: -
	I0918 19:17:56.176020  712152 start.go:537] Will wait 60s for crictl version
	I0918 19:17:56.176101  712152 ssh_runner.go:195] Run: which crictl
	I0918 19:17:56.180624  712152 command_runner.go:130] > /usr/bin/crictl
	I0918 19:17:56.180926  712152 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0918 19:17:56.224500  712152 command_runner.go:130] > Version:  0.1.0
	I0918 19:17:56.224532  712152 command_runner.go:130] > RuntimeName:  cri-o
	I0918 19:17:56.224538  712152 command_runner.go:130] > RuntimeVersion:  1.24.6
	I0918 19:17:56.224572  712152 command_runner.go:130] > RuntimeApiVersion:  v1
	I0918 19:17:56.227339  712152 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0918 19:17:56.227459  712152 ssh_runner.go:195] Run: crio --version
	I0918 19:17:56.278989  712152 command_runner.go:130] > crio version 1.24.6
	I0918 19:17:56.279009  712152 command_runner.go:130] > Version:          1.24.6
	I0918 19:17:56.279056  712152 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0918 19:17:56.279070  712152 command_runner.go:130] > GitTreeState:     clean
	I0918 19:17:56.279077  712152 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0918 19:17:56.279083  712152 command_runner.go:130] > GoVersion:        go1.18.2
	I0918 19:17:56.279092  712152 command_runner.go:130] > Compiler:         gc
	I0918 19:17:56.279097  712152 command_runner.go:130] > Platform:         linux/arm64
	I0918 19:17:56.279103  712152 command_runner.go:130] > Linkmode:         dynamic
	I0918 19:17:56.279129  712152 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0918 19:17:56.279140  712152 command_runner.go:130] > SeccompEnabled:   true
	I0918 19:17:56.279146  712152 command_runner.go:130] > AppArmorEnabled:  false
	I0918 19:17:56.281182  712152 ssh_runner.go:195] Run: crio --version
	I0918 19:17:56.328207  712152 command_runner.go:130] > crio version 1.24.6
	I0918 19:17:56.328226  712152 command_runner.go:130] > Version:          1.24.6
	I0918 19:17:56.328235  712152 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0918 19:17:56.328241  712152 command_runner.go:130] > GitTreeState:     clean
	I0918 19:17:56.328249  712152 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0918 19:17:56.328255  712152 command_runner.go:130] > GoVersion:        go1.18.2
	I0918 19:17:56.328261  712152 command_runner.go:130] > Compiler:         gc
	I0918 19:17:56.328266  712152 command_runner.go:130] > Platform:         linux/arm64
	I0918 19:17:56.328273  712152 command_runner.go:130] > Linkmode:         dynamic
	I0918 19:17:56.328283  712152 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0918 19:17:56.328291  712152 command_runner.go:130] > SeccompEnabled:   true
	I0918 19:17:56.328297  712152 command_runner.go:130] > AppArmorEnabled:  false
	I0918 19:17:56.333987  712152 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.6 ...
	I0918 19:17:56.336436  712152 cli_runner.go:164] Run: docker network inspect multinode-689235 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0918 19:17:56.353837  712152 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0918 19:17:56.358477  712152 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
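
This one-liner rewrites /etc/hosts so host.minikube.internal resolves to the network gateway: it filters out any stale entry, appends the fresh one, and copies the result back over the file. Copying rather than sed -i matters here, most likely because /etc/hosts is bind-mounted into the container and replacing its inode would break the mount. The same logic unrolled:

	{
	  grep -v $'\thost.minikube.internal$' /etc/hosts
	  printf '192.168.58.1\thost.minikube.internal\n'
	} > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts
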
	I0918 19:17:56.371559  712152 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I0918 19:17:56.371625  712152 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 19:17:56.434091  712152 command_runner.go:130] > {
	I0918 19:17:56.434109  712152 command_runner.go:130] >   "images": [
	I0918 19:17:56.434115  712152 command_runner.go:130] >     {
	I0918 19:17:56.434124  712152 command_runner.go:130] >       "id": "04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26",
	I0918 19:17:56.434130  712152 command_runner.go:130] >       "repoTags": [
	I0918 19:17:56.434138  712152 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0918 19:17:56.434143  712152 command_runner.go:130] >       ],
	I0918 19:17:56.434148  712152 command_runner.go:130] >       "repoDigests": [
	I0918 19:17:56.434159  712152 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0918 19:17:56.434171  712152 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2"
	I0918 19:17:56.434178  712152 command_runner.go:130] >       ],
	I0918 19:17:56.434184  712152 command_runner.go:130] >       "size": "60867618",
	I0918 19:17:56.434192  712152 command_runner.go:130] >       "uid": null,
	I0918 19:17:56.434198  712152 command_runner.go:130] >       "username": "",
	I0918 19:17:56.434206  712152 command_runner.go:130] >       "spec": null,
	I0918 19:17:56.434213  712152 command_runner.go:130] >       "pinned": false
	I0918 19:17:56.434218  712152 command_runner.go:130] >     },
	I0918 19:17:56.434223  712152 command_runner.go:130] >     {
	I0918 19:17:56.434233  712152 command_runner.go:130] >       "id": "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I0918 19:17:56.434239  712152 command_runner.go:130] >       "repoTags": [
	I0918 19:17:56.434246  712152 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0918 19:17:56.434256  712152 command_runner.go:130] >       ],
	I0918 19:17:56.434261  712152 command_runner.go:130] >       "repoDigests": [
	I0918 19:17:56.434271  712152 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I0918 19:17:56.434286  712152 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I0918 19:17:56.434290  712152 command_runner.go:130] >       ],
	I0918 19:17:56.434300  712152 command_runner.go:130] >       "size": "29037500",
	I0918 19:17:56.434305  712152 command_runner.go:130] >       "uid": null,
	I0918 19:17:56.434313  712152 command_runner.go:130] >       "username": "",
	I0918 19:17:56.434318  712152 command_runner.go:130] >       "spec": null,
	I0918 19:17:56.434323  712152 command_runner.go:130] >       "pinned": false
	I0918 19:17:56.434331  712152 command_runner.go:130] >     },
	I0918 19:17:56.434336  712152 command_runner.go:130] >     {
	I0918 19:17:56.434346  712152 command_runner.go:130] >       "id": "97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108",
	I0918 19:17:56.434354  712152 command_runner.go:130] >       "repoTags": [
	I0918 19:17:56.434361  712152 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0918 19:17:56.434368  712152 command_runner.go:130] >       ],
	I0918 19:17:56.434377  712152 command_runner.go:130] >       "repoDigests": [
	I0918 19:17:56.434387  712152 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105",
	I0918 19:17:56.434399  712152 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"
	I0918 19:17:56.434404  712152 command_runner.go:130] >       ],
	I0918 19:17:56.434413  712152 command_runner.go:130] >       "size": "51393451",
	I0918 19:17:56.434418  712152 command_runner.go:130] >       "uid": null,
	I0918 19:17:56.434425  712152 command_runner.go:130] >       "username": "",
	I0918 19:17:56.434431  712152 command_runner.go:130] >       "spec": null,
	I0918 19:17:56.434438  712152 command_runner.go:130] >       "pinned": false
	I0918 19:17:56.434443  712152 command_runner.go:130] >     },
	I0918 19:17:56.434448  712152 command_runner.go:130] >     {
	I0918 19:17:56.434458  712152 command_runner.go:130] >       "id": "9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace",
	I0918 19:17:56.434463  712152 command_runner.go:130] >       "repoTags": [
	I0918 19:17:56.434471  712152 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0918 19:17:56.434476  712152 command_runner.go:130] >       ],
	I0918 19:17:56.434481  712152 command_runner.go:130] >       "repoDigests": [
	I0918 19:17:56.434491  712152 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3",
	I0918 19:17:56.434502  712152 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b"
	I0918 19:17:56.434510  712152 command_runner.go:130] >       ],
	I0918 19:17:56.434522  712152 command_runner.go:130] >       "size": "182203183",
	I0918 19:17:56.434527  712152 command_runner.go:130] >       "uid": {
	I0918 19:17:56.434532  712152 command_runner.go:130] >         "value": "0"
	I0918 19:17:56.434539  712152 command_runner.go:130] >       },
	I0918 19:17:56.434544  712152 command_runner.go:130] >       "username": "",
	I0918 19:17:56.434549  712152 command_runner.go:130] >       "spec": null,
	I0918 19:17:56.434557  712152 command_runner.go:130] >       "pinned": false
	I0918 19:17:56.434561  712152 command_runner.go:130] >     },
	I0918 19:17:56.434565  712152 command_runner.go:130] >     {
	I0918 19:17:56.434573  712152 command_runner.go:130] >       "id": "30bb499447fe1bc655853e2e8ac386cdeb28c80263536259cb54f290f9a58d6c",
	I0918 19:17:56.434581  712152 command_runner.go:130] >       "repoTags": [
	I0918 19:17:56.434587  712152 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.2"
	I0918 19:17:56.434592  712152 command_runner.go:130] >       ],
	I0918 19:17:56.434599  712152 command_runner.go:130] >       "repoDigests": [
	I0918 19:17:56.434609  712152 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:07742a71be5e2ac5dc434618fa720ba38bebb463e3bdc0c58b600b4f7716bc3d",
	I0918 19:17:56.434621  712152 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:6beea2e5531a0606613594fd3ed92d71bbdcef99dd3237522049a0b32cad736c"
	I0918 19:17:56.434627  712152 command_runner.go:130] >       ],
	I0918 19:17:56.434634  712152 command_runner.go:130] >       "size": "121054158",
	I0918 19:17:56.434639  712152 command_runner.go:130] >       "uid": {
	I0918 19:17:56.434644  712152 command_runner.go:130] >         "value": "0"
	I0918 19:17:56.434648  712152 command_runner.go:130] >       },
	I0918 19:17:56.434655  712152 command_runner.go:130] >       "username": "",
	I0918 19:17:56.434661  712152 command_runner.go:130] >       "spec": null,
	I0918 19:17:56.434668  712152 command_runner.go:130] >       "pinned": false
	I0918 19:17:56.434672  712152 command_runner.go:130] >     },
	I0918 19:17:56.434677  712152 command_runner.go:130] >     {
	I0918 19:17:56.434687  712152 command_runner.go:130] >       "id": "89d57b83c17862d0ca2dd214e9e5ad425f8d67ecba32d10b846f8d22d3b5597c",
	I0918 19:17:56.434693  712152 command_runner.go:130] >       "repoTags": [
	I0918 19:17:56.434722  712152 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.2"
	I0918 19:17:56.434726  712152 command_runner.go:130] >       ],
	I0918 19:17:56.434732  712152 command_runner.go:130] >       "repoDigests": [
	I0918 19:17:56.434744  712152 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:3c85f8a91743f4c306163137b121c64816c5c15bf2f002c9238ce0a4ac22b5c8",
	I0918 19:17:56.434756  712152 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4"
	I0918 19:17:56.434761  712152 command_runner.go:130] >       ],
	I0918 19:17:56.434771  712152 command_runner.go:130] >       "size": "117187380",
	I0918 19:17:56.434776  712152 command_runner.go:130] >       "uid": {
	I0918 19:17:56.434784  712152 command_runner.go:130] >         "value": "0"
	I0918 19:17:56.434788  712152 command_runner.go:130] >       },
	I0918 19:17:56.434794  712152 command_runner.go:130] >       "username": "",
	I0918 19:17:56.434801  712152 command_runner.go:130] >       "spec": null,
	I0918 19:17:56.434806  712152 command_runner.go:130] >       "pinned": false
	I0918 19:17:56.434810  712152 command_runner.go:130] >     },
	I0918 19:17:56.434816  712152 command_runner.go:130] >     {
	I0918 19:17:56.434826  712152 command_runner.go:130] >       "id": "7da62c127fc0f2c3473babe4dd0fe1da874278c4e524a490b1781e3e0e6dddfa",
	I0918 19:17:56.434832  712152 command_runner.go:130] >       "repoTags": [
	I0918 19:17:56.434838  712152 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.2"
	I0918 19:17:56.434845  712152 command_runner.go:130] >       ],
	I0918 19:17:56.434850  712152 command_runner.go:130] >       "repoDigests": [
	I0918 19:17:56.434859  712152 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:41c8f92d1cd571e0e36af431f35c78379f84f5daf5b85d43014a9940d697afcf",
	I0918 19:17:56.434871  712152 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:714d43ef0334cfb0e15ffd89f0b385681374b72a4865be28ff891b6297c015b8"
	I0918 19:17:56.434876  712152 command_runner.go:130] >       ],
	I0918 19:17:56.434884  712152 command_runner.go:130] >       "size": "69926807",
	I0918 19:17:56.434892  712152 command_runner.go:130] >       "uid": null,
	I0918 19:17:56.434897  712152 command_runner.go:130] >       "username": "",
	I0918 19:17:56.434905  712152 command_runner.go:130] >       "spec": null,
	I0918 19:17:56.434910  712152 command_runner.go:130] >       "pinned": false
	I0918 19:17:56.434917  712152 command_runner.go:130] >     },
	I0918 19:17:56.434921  712152 command_runner.go:130] >     {
	I0918 19:17:56.434928  712152 command_runner.go:130] >       "id": "64fc40cee3716a4596d219b360ce536adb7998eaeae3f5dbb774d6503f5039d7",
	I0918 19:17:56.434937  712152 command_runner.go:130] >       "repoTags": [
	I0918 19:17:56.434943  712152 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.2"
	I0918 19:17:56.434949  712152 command_runner.go:130] >       ],
	I0918 19:17:56.434955  712152 command_runner.go:130] >       "repoDigests": [
	I0918 19:17:56.434997  712152 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab",
	I0918 19:17:56.435010  712152 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:f673cc4710d8ed6e3bd224b5641d2537d08e19177a291c2d9e189ea16f081c88"
	I0918 19:17:56.435015  712152 command_runner.go:130] >       ],
	I0918 19:17:56.435020  712152 command_runner.go:130] >       "size": "59188020",
	I0918 19:17:56.435025  712152 command_runner.go:130] >       "uid": {
	I0918 19:17:56.435030  712152 command_runner.go:130] >         "value": "0"
	I0918 19:17:56.435034  712152 command_runner.go:130] >       },
	I0918 19:17:56.435039  712152 command_runner.go:130] >       "username": "",
	I0918 19:17:56.435044  712152 command_runner.go:130] >       "spec": null,
	I0918 19:17:56.435049  712152 command_runner.go:130] >       "pinned": false
	I0918 19:17:56.435057  712152 command_runner.go:130] >     },
	I0918 19:17:56.435062  712152 command_runner.go:130] >     {
	I0918 19:17:56.435073  712152 command_runner.go:130] >       "id": "829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e",
	I0918 19:17:56.435081  712152 command_runner.go:130] >       "repoTags": [
	I0918 19:17:56.435087  712152 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0918 19:17:56.435093  712152 command_runner.go:130] >       ],
	I0918 19:17:56.435098  712152 command_runner.go:130] >       "repoDigests": [
	I0918 19:17:56.435107  712152 command_runner.go:130] >         "registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6",
	I0918 19:17:56.435119  712152 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"
	I0918 19:17:56.435123  712152 command_runner.go:130] >       ],
	I0918 19:17:56.435130  712152 command_runner.go:130] >       "size": "520014",
	I0918 19:17:56.435135  712152 command_runner.go:130] >       "uid": {
	I0918 19:17:56.435140  712152 command_runner.go:130] >         "value": "65535"
	I0918 19:17:56.435170  712152 command_runner.go:130] >       },
	I0918 19:17:56.435175  712152 command_runner.go:130] >       "username": "",
	I0918 19:17:56.435181  712152 command_runner.go:130] >       "spec": null,
	I0918 19:17:56.435186  712152 command_runner.go:130] >       "pinned": false
	I0918 19:17:56.435193  712152 command_runner.go:130] >     }
	I0918 19:17:56.435197  712152 command_runner.go:130] >   ]
	I0918 19:17:56.435201  712152 command_runner.go:130] > }
	I0918 19:17:56.438038  712152 crio.go:496] all images are preloaded for cri-o runtime.
	I0918 19:17:56.438060  712152 crio.go:415] Images already preloaded, skipping extraction
	I0918 19:17:56.438117  712152 ssh_runner.go:195] Run: sudo crictl images --output json
	I0918 19:17:56.475413  712152 command_runner.go:130] > {
	I0918 19:17:56.475433  712152 command_runner.go:130] >   "images": [
	I0918 19:17:56.475438  712152 command_runner.go:130] >     {
	I0918 19:17:56.475447  712152 command_runner.go:130] >       "id": "04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26",
	I0918 19:17:56.475454  712152 command_runner.go:130] >       "repoTags": [
	I0918 19:17:56.475464  712152 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0918 19:17:56.475473  712152 command_runner.go:130] >       ],
	I0918 19:17:56.475479  712152 command_runner.go:130] >       "repoDigests": [
	I0918 19:17:56.475489  712152 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0918 19:17:56.475502  712152 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2"
	I0918 19:17:56.475506  712152 command_runner.go:130] >       ],
	I0918 19:17:56.475511  712152 command_runner.go:130] >       "size": "60867618",
	I0918 19:17:56.475516  712152 command_runner.go:130] >       "uid": null,
	I0918 19:17:56.475525  712152 command_runner.go:130] >       "username": "",
	I0918 19:17:56.475533  712152 command_runner.go:130] >       "spec": null,
	I0918 19:17:56.475542  712152 command_runner.go:130] >       "pinned": false
	I0918 19:17:56.475546  712152 command_runner.go:130] >     },
	I0918 19:17:56.475550  712152 command_runner.go:130] >     {
	I0918 19:17:56.475558  712152 command_runner.go:130] >       "id": "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I0918 19:17:56.475563  712152 command_runner.go:130] >       "repoTags": [
	I0918 19:17:56.475572  712152 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0918 19:17:56.475577  712152 command_runner.go:130] >       ],
	I0918 19:17:56.475582  712152 command_runner.go:130] >       "repoDigests": [
	I0918 19:17:56.475592  712152 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I0918 19:17:56.475603  712152 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I0918 19:17:56.475610  712152 command_runner.go:130] >       ],
	I0918 19:17:56.475618  712152 command_runner.go:130] >       "size": "29037500",
	I0918 19:17:56.475625  712152 command_runner.go:130] >       "uid": null,
	I0918 19:17:56.475630  712152 command_runner.go:130] >       "username": "",
	I0918 19:17:56.475635  712152 command_runner.go:130] >       "spec": null,
	I0918 19:17:56.475640  712152 command_runner.go:130] >       "pinned": false
	I0918 19:17:56.475649  712152 command_runner.go:130] >     },
	I0918 19:17:56.475655  712152 command_runner.go:130] >     {
	I0918 19:17:56.475663  712152 command_runner.go:130] >       "id": "97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108",
	I0918 19:17:56.475670  712152 command_runner.go:130] >       "repoTags": [
	I0918 19:17:56.475676  712152 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0918 19:17:56.475681  712152 command_runner.go:130] >       ],
	I0918 19:17:56.475686  712152 command_runner.go:130] >       "repoDigests": [
	I0918 19:17:56.475697  712152 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105",
	I0918 19:17:56.475707  712152 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"
	I0918 19:17:56.475714  712152 command_runner.go:130] >       ],
	I0918 19:17:56.475720  712152 command_runner.go:130] >       "size": "51393451",
	I0918 19:17:56.475724  712152 command_runner.go:130] >       "uid": null,
	I0918 19:17:56.475729  712152 command_runner.go:130] >       "username": "",
	I0918 19:17:56.475734  712152 command_runner.go:130] >       "spec": null,
	I0918 19:17:56.475743  712152 command_runner.go:130] >       "pinned": false
	I0918 19:17:56.475748  712152 command_runner.go:130] >     },
	I0918 19:17:56.475755  712152 command_runner.go:130] >     {
	I0918 19:17:56.475762  712152 command_runner.go:130] >       "id": "9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace",
	I0918 19:17:56.475768  712152 command_runner.go:130] >       "repoTags": [
	I0918 19:17:56.475774  712152 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0918 19:17:56.475795  712152 command_runner.go:130] >       ],
	I0918 19:17:56.475801  712152 command_runner.go:130] >       "repoDigests": [
	I0918 19:17:56.475810  712152 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3",
	I0918 19:17:56.475819  712152 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b"
	I0918 19:17:56.475830  712152 command_runner.go:130] >       ],
	I0918 19:17:56.475838  712152 command_runner.go:130] >       "size": "182203183",
	I0918 19:17:56.475842  712152 command_runner.go:130] >       "uid": {
	I0918 19:17:56.475847  712152 command_runner.go:130] >         "value": "0"
	I0918 19:17:56.475852  712152 command_runner.go:130] >       },
	I0918 19:17:56.475857  712152 command_runner.go:130] >       "username": "",
	I0918 19:17:56.475863  712152 command_runner.go:130] >       "spec": null,
	I0918 19:17:56.475868  712152 command_runner.go:130] >       "pinned": false
	I0918 19:17:56.475875  712152 command_runner.go:130] >     },
	I0918 19:17:56.475879  712152 command_runner.go:130] >     {
	I0918 19:17:56.475887  712152 command_runner.go:130] >       "id": "30bb499447fe1bc655853e2e8ac386cdeb28c80263536259cb54f290f9a58d6c",
	I0918 19:17:56.475895  712152 command_runner.go:130] >       "repoTags": [
	I0918 19:17:56.475902  712152 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.2"
	I0918 19:17:56.475912  712152 command_runner.go:130] >       ],
	I0918 19:17:56.475920  712152 command_runner.go:130] >       "repoDigests": [
	I0918 19:17:56.475929  712152 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:07742a71be5e2ac5dc434618fa720ba38bebb463e3bdc0c58b600b4f7716bc3d",
	I0918 19:17:56.475938  712152 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:6beea2e5531a0606613594fd3ed92d71bbdcef99dd3237522049a0b32cad736c"
	I0918 19:17:56.475945  712152 command_runner.go:130] >       ],
	I0918 19:17:56.475952  712152 command_runner.go:130] >       "size": "121054158",
	I0918 19:17:56.475959  712152 command_runner.go:130] >       "uid": {
	I0918 19:17:56.475964  712152 command_runner.go:130] >         "value": "0"
	I0918 19:17:56.475971  712152 command_runner.go:130] >       },
	I0918 19:17:56.475976  712152 command_runner.go:130] >       "username": "",
	I0918 19:17:56.475982  712152 command_runner.go:130] >       "spec": null,
	I0918 19:17:56.475987  712152 command_runner.go:130] >       "pinned": false
	I0918 19:17:56.475997  712152 command_runner.go:130] >     },
	I0918 19:17:56.476001  712152 command_runner.go:130] >     {
	I0918 19:17:56.476013  712152 command_runner.go:130] >       "id": "89d57b83c17862d0ca2dd214e9e5ad425f8d67ecba32d10b846f8d22d3b5597c",
	I0918 19:17:56.476018  712152 command_runner.go:130] >       "repoTags": [
	I0918 19:17:56.476025  712152 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.2"
	I0918 19:17:56.476030  712152 command_runner.go:130] >       ],
	I0918 19:17:56.476038  712152 command_runner.go:130] >       "repoDigests": [
	I0918 19:17:56.476051  712152 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:3c85f8a91743f4c306163137b121c64816c5c15bf2f002c9238ce0a4ac22b5c8",
	I0918 19:17:56.476060  712152 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4"
	I0918 19:17:56.476065  712152 command_runner.go:130] >       ],
	I0918 19:17:56.476072  712152 command_runner.go:130] >       "size": "117187380",
	I0918 19:17:56.476079  712152 command_runner.go:130] >       "uid": {
	I0918 19:17:56.476084  712152 command_runner.go:130] >         "value": "0"
	I0918 19:17:56.476090  712152 command_runner.go:130] >       },
	I0918 19:17:56.476096  712152 command_runner.go:130] >       "username": "",
	I0918 19:17:56.476100  712152 command_runner.go:130] >       "spec": null,
	I0918 19:17:56.476105  712152 command_runner.go:130] >       "pinned": false
	I0918 19:17:56.476117  712152 command_runner.go:130] >     },
	I0918 19:17:56.476121  712152 command_runner.go:130] >     {
	I0918 19:17:56.476129  712152 command_runner.go:130] >       "id": "7da62c127fc0f2c3473babe4dd0fe1da874278c4e524a490b1781e3e0e6dddfa",
	I0918 19:17:56.476136  712152 command_runner.go:130] >       "repoTags": [
	I0918 19:17:56.476142  712152 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.2"
	I0918 19:17:56.476147  712152 command_runner.go:130] >       ],
	I0918 19:17:56.476155  712152 command_runner.go:130] >       "repoDigests": [
	I0918 19:17:56.476166  712152 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:41c8f92d1cd571e0e36af431f35c78379f84f5daf5b85d43014a9940d697afcf",
	I0918 19:17:56.476178  712152 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:714d43ef0334cfb0e15ffd89f0b385681374b72a4865be28ff891b6297c015b8"
	I0918 19:17:56.476182  712152 command_runner.go:130] >       ],
	I0918 19:17:56.476187  712152 command_runner.go:130] >       "size": "69926807",
	I0918 19:17:56.476199  712152 command_runner.go:130] >       "uid": null,
	I0918 19:17:56.476204  712152 command_runner.go:130] >       "username": "",
	I0918 19:17:56.476209  712152 command_runner.go:130] >       "spec": null,
	I0918 19:17:56.476214  712152 command_runner.go:130] >       "pinned": false
	I0918 19:17:56.476218  712152 command_runner.go:130] >     },
	I0918 19:17:56.476225  712152 command_runner.go:130] >     {
	I0918 19:17:56.476234  712152 command_runner.go:130] >       "id": "64fc40cee3716a4596d219b360ce536adb7998eaeae3f5dbb774d6503f5039d7",
	I0918 19:17:56.476242  712152 command_runner.go:130] >       "repoTags": [
	I0918 19:17:56.476248  712152 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.2"
	I0918 19:17:56.476255  712152 command_runner.go:130] >       ],
	I0918 19:17:56.476260  712152 command_runner.go:130] >       "repoDigests": [
	I0918 19:17:56.476311  712152 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab",
	I0918 19:17:56.476324  712152 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:f673cc4710d8ed6e3bd224b5641d2537d08e19177a291c2d9e189ea16f081c88"
	I0918 19:17:56.476328  712152 command_runner.go:130] >       ],
	I0918 19:17:56.476334  712152 command_runner.go:130] >       "size": "59188020",
	I0918 19:17:56.476339  712152 command_runner.go:130] >       "uid": {
	I0918 19:17:56.476344  712152 command_runner.go:130] >         "value": "0"
	I0918 19:17:56.476348  712152 command_runner.go:130] >       },
	I0918 19:17:56.476353  712152 command_runner.go:130] >       "username": "",
	I0918 19:17:56.476358  712152 command_runner.go:130] >       "spec": null,
	I0918 19:17:56.476362  712152 command_runner.go:130] >       "pinned": false
	I0918 19:17:56.476367  712152 command_runner.go:130] >     },
	I0918 19:17:56.476371  712152 command_runner.go:130] >     {
	I0918 19:17:56.476378  712152 command_runner.go:130] >       "id": "829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e",
	I0918 19:17:56.476386  712152 command_runner.go:130] >       "repoTags": [
	I0918 19:17:56.476392  712152 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0918 19:17:56.476399  712152 command_runner.go:130] >       ],
	I0918 19:17:56.476404  712152 command_runner.go:130] >       "repoDigests": [
	I0918 19:17:56.476413  712152 command_runner.go:130] >         "registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6",
	I0918 19:17:56.476424  712152 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"
	I0918 19:17:56.476429  712152 command_runner.go:130] >       ],
	I0918 19:17:56.476437  712152 command_runner.go:130] >       "size": "520014",
	I0918 19:17:56.476441  712152 command_runner.go:130] >       "uid": {
	I0918 19:17:56.476446  712152 command_runner.go:130] >         "value": "65535"
	I0918 19:17:56.476452  712152 command_runner.go:130] >       },
	I0918 19:17:56.476457  712152 command_runner.go:130] >       "username": "",
	I0918 19:17:56.476462  712152 command_runner.go:130] >       "spec": null,
	I0918 19:17:56.476470  712152 command_runner.go:130] >       "pinned": false
	I0918 19:17:56.476476  712152 command_runner.go:130] >     }
	I0918 19:17:56.476481  712152 command_runner.go:130] >   ]
	I0918 19:17:56.476492  712152 command_runner.go:130] > }
	I0918 19:17:56.479307  712152 crio.go:496] all images are preloaded for cri-o runtime.
	I0918 19:17:56.479330  712152 cache_images.go:84] Images are preloaded, skipping loading
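
Both crictl dumps above report the same nine images, so the preload tarball is neither downloaded nor extracted. To eyeball the same JSON by hand, a sketch using jq (jq is an assumption here, not something this log uses):

	sudo crictl images --output json | jq -r '.images[] | .repoTags[0] + "  " + .size'
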
	I0918 19:17:56.479406  712152 ssh_runner.go:195] Run: crio config
	I0918 19:17:56.526932  712152 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0918 19:17:56.526959  712152 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0918 19:17:56.526968  712152 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0918 19:17:56.526973  712152 command_runner.go:130] > #
	I0918 19:17:56.526981  712152 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0918 19:17:56.526989  712152 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0918 19:17:56.527008  712152 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0918 19:17:56.527031  712152 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0918 19:17:56.527041  712152 command_runner.go:130] > # reload'.
	I0918 19:17:56.527049  712152 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0918 19:17:56.527058  712152 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0918 19:17:56.527070  712152 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0918 19:17:56.527077  712152 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0918 19:17:56.527082  712152 command_runner.go:130] > [crio]
	I0918 19:17:56.527093  712152 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0918 19:17:56.527099  712152 command_runner.go:130] > # containers images, in this directory.
	I0918 19:17:56.527109  712152 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I0918 19:17:56.527120  712152 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0918 19:17:56.527128  712152 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I0918 19:17:56.527139  712152 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0918 19:17:56.527147  712152 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0918 19:17:56.527157  712152 command_runner.go:130] > # storage_driver = "vfs"
	I0918 19:17:56.527164  712152 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0918 19:17:56.527171  712152 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0918 19:17:56.527178  712152 command_runner.go:130] > # storage_option = [
	I0918 19:17:56.527183  712152 command_runner.go:130] > # ]
	I0918 19:17:56.527192  712152 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0918 19:17:56.527204  712152 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0918 19:17:56.527211  712152 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0918 19:17:56.527221  712152 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0918 19:17:56.527230  712152 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0918 19:17:56.527240  712152 command_runner.go:130] > # always happen on a node reboot
	I0918 19:17:56.527247  712152 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0918 19:17:56.527260  712152 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0918 19:17:56.527268  712152 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0918 19:17:56.527277  712152 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0918 19:17:56.527287  712152 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0918 19:17:56.527297  712152 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0918 19:17:56.527310  712152 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0918 19:17:56.527464  712152 command_runner.go:130] > # internal_wipe = true
	I0918 19:17:56.527476  712152 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0918 19:17:56.527485  712152 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0918 19:17:56.527493  712152 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0918 19:17:56.528037  712152 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0918 19:17:56.528065  712152 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0918 19:17:56.528072  712152 command_runner.go:130] > [crio.api]
	I0918 19:17:56.528079  712152 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0918 19:17:56.528478  712152 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0918 19:17:56.528494  712152 command_runner.go:130] > # IP address on which the stream server will listen.
	I0918 19:17:56.528848  712152 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0918 19:17:56.528865  712152 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0918 19:17:56.528872  712152 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0918 19:17:56.529125  712152 command_runner.go:130] > # stream_port = "0"
	I0918 19:17:56.529142  712152 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0918 19:17:56.529418  712152 command_runner.go:130] > # stream_enable_tls = false
	I0918 19:17:56.529434  712152 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0918 19:17:56.529446  712152 command_runner.go:130] > # stream_idle_timeout = ""
	I0918 19:17:56.529461  712152 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0918 19:17:56.529474  712152 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0918 19:17:56.529479  712152 command_runner.go:130] > # minutes.
	I0918 19:17:56.529632  712152 command_runner.go:130] > # stream_tls_cert = ""
	I0918 19:17:56.529646  712152 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0918 19:17:56.529661  712152 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0918 19:17:56.529669  712152 command_runner.go:130] > # stream_tls_key = ""
	I0918 19:17:56.529677  712152 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0918 19:17:56.529685  712152 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0918 19:17:56.529694  712152 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0918 19:17:56.529853  712152 command_runner.go:130] > # stream_tls_ca = ""
	I0918 19:17:56.529892  712152 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0918 19:17:56.529903  712152 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I0918 19:17:56.529912  712152 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0918 19:17:56.529918  712152 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
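	For reference, a minimal Go sketch of a client honoring these limits — dialing the default socket from the [crio.api] section above with call-size options matching the 16 MiB fallback the comments describe (google.golang.org/grpc is assumed; this is illustrative, not minikube's own client code):
	
	package main
	
	import (
		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
	)
	
	func main() {
		// Dial CRI-O's AF_LOCAL socket; unix sockets use insecure transport credentials.
		conn, err := grpc.Dial(
			"unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()),
			grpc.WithDefaultCallOptions(
				grpc.MaxCallRecvMsgSize(16*1024*1024), // mirrors the grpc_max_recv_msg_size fallback
				grpc.MaxCallSendMsgSize(16*1024*1024), // mirrors the grpc_max_send_msg_size fallback
			),
		)
		if err != nil {
			panic(err)
		}
		defer conn.Close()
		// conn can now back a CRI RuntimeService/ImageService client (e.g. k8s.io/cri-api).
	}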
	I0918 19:17:56.529937  712152 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0918 19:17:56.529948  712152 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0918 19:17:56.529953  712152 command_runner.go:130] > [crio.runtime]
	I0918 19:17:56.529960  712152 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0918 19:17:56.529970  712152 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0918 19:17:56.529975  712152 command_runner.go:130] > # "nofile=1024:2048"
	I0918 19:17:56.529983  712152 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0918 19:17:56.529992  712152 command_runner.go:130] > # default_ulimits = [
	I0918 19:17:56.529998  712152 command_runner.go:130] > # ]
	I0918 19:17:56.530005  712152 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0918 19:17:56.530017  712152 command_runner.go:130] > # no_pivot = false
	I0918 19:17:56.530025  712152 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0918 19:17:56.530036  712152 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0918 19:17:56.530043  712152 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0918 19:17:56.530051  712152 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0918 19:17:56.530059  712152 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0918 19:17:56.530068  712152 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0918 19:17:56.530076  712152 command_runner.go:130] > # conmon = ""
	I0918 19:17:56.530081  712152 command_runner.go:130] > # Cgroup setting for conmon
	I0918 19:17:56.530090  712152 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0918 19:17:56.530098  712152 command_runner.go:130] > conmon_cgroup = "pod"
	I0918 19:17:56.530106  712152 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0918 19:17:56.530116  712152 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0918 19:17:56.530124  712152 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0918 19:17:56.530132  712152 command_runner.go:130] > # conmon_env = [
	I0918 19:17:56.530136  712152 command_runner.go:130] > # ]
	I0918 19:17:56.530143  712152 command_runner.go:130] > # Additional environment variables to set for all the
	I0918 19:17:56.530150  712152 command_runner.go:130] > # containers. These are overridden if set in the
	I0918 19:17:56.530159  712152 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0918 19:17:56.530167  712152 command_runner.go:130] > # default_env = [
	I0918 19:17:56.530348  712152 command_runner.go:130] > # ]
	I0918 19:17:56.530363  712152 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0918 19:17:56.530375  712152 command_runner.go:130] > # selinux = false
	I0918 19:17:56.530387  712152 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0918 19:17:56.530395  712152 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0918 19:17:56.530405  712152 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0918 19:17:56.530411  712152 command_runner.go:130] > # seccomp_profile = ""
	I0918 19:17:56.530421  712152 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0918 19:17:56.530429  712152 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0918 19:17:56.530439  712152 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0918 19:17:56.530445  712152 command_runner.go:130] > # which might increase security.
	I0918 19:17:56.530451  712152 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I0918 19:17:56.530459  712152 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0918 19:17:56.530469  712152 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0918 19:17:56.530477  712152 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0918 19:17:56.530489  712152 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0918 19:17:56.530496  712152 command_runner.go:130] > # This option supports live configuration reload.
	I0918 19:17:56.530505  712152 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0918 19:17:56.530516  712152 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0918 19:17:56.530525  712152 command_runner.go:130] > # the cgroup blockio controller.
	I0918 19:17:56.530533  712152 command_runner.go:130] > # blockio_config_file = ""
	I0918 19:17:56.530541  712152 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0918 19:17:56.530549  712152 command_runner.go:130] > # irqbalance daemon.
	I0918 19:17:56.530556  712152 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0918 19:17:56.530564  712152 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0918 19:17:56.530573  712152 command_runner.go:130] > # This option supports live configuration reload.
	I0918 19:17:56.530579  712152 command_runner.go:130] > # rdt_config_file = ""
	I0918 19:17:56.530591  712152 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0918 19:17:56.530770  712152 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0918 19:17:56.530784  712152 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0918 19:17:56.530791  712152 command_runner.go:130] > # separate_pull_cgroup = ""
	I0918 19:17:56.530799  712152 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0918 19:17:56.530808  712152 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0918 19:17:56.530813  712152 command_runner.go:130] > # will be added.
	I0918 19:17:56.530819  712152 command_runner.go:130] > # default_capabilities = [
	I0918 19:17:56.530827  712152 command_runner.go:130] > # 	"CHOWN",
	I0918 19:17:56.530832  712152 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0918 19:17:56.530837  712152 command_runner.go:130] > # 	"FSETID",
	I0918 19:17:56.530846  712152 command_runner.go:130] > # 	"FOWNER",
	I0918 19:17:56.530850  712152 command_runner.go:130] > # 	"SETGID",
	I0918 19:17:56.530856  712152 command_runner.go:130] > # 	"SETUID",
	I0918 19:17:56.530863  712152 command_runner.go:130] > # 	"SETPCAP",
	I0918 19:17:56.530869  712152 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0918 19:17:56.530877  712152 command_runner.go:130] > # 	"KILL",
	I0918 19:17:56.530884  712152 command_runner.go:130] > # ]
	I0918 19:17:56.530893  712152 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0918 19:17:56.530903  712152 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0918 19:17:56.530909  712152 command_runner.go:130] > # add_inheritable_capabilities = true
	I0918 19:17:56.530923  712152 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0918 19:17:56.530930  712152 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0918 19:17:56.530940  712152 command_runner.go:130] > # default_sysctls = [
	I0918 19:17:56.530945  712152 command_runner.go:130] > # ]
	I0918 19:17:56.530950  712152 command_runner.go:130] > # List of devices on the host that a
	I0918 19:17:56.530961  712152 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0918 19:17:56.530967  712152 command_runner.go:130] > # allowed_devices = [
	I0918 19:17:56.530972  712152 command_runner.go:130] > # 	"/dev/fuse",
	I0918 19:17:56.531166  712152 command_runner.go:130] > # ]
	I0918 19:17:56.531180  712152 command_runner.go:130] > # List of additional devices. specified as
	I0918 19:17:56.531207  712152 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0918 19:17:56.531218  712152 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0918 19:17:56.531225  712152 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0918 19:17:56.531239  712152 command_runner.go:130] > # additional_devices = [
	I0918 19:17:56.531244  712152 command_runner.go:130] > # ]
	I0918 19:17:56.531256  712152 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0918 19:17:56.531261  712152 command_runner.go:130] > # cdi_spec_dirs = [
	I0918 19:17:56.531270  712152 command_runner.go:130] > # 	"/etc/cdi",
	I0918 19:17:56.531275  712152 command_runner.go:130] > # 	"/var/run/cdi",
	I0918 19:17:56.531280  712152 command_runner.go:130] > # ]
	I0918 19:17:56.531290  712152 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0918 19:17:56.531298  712152 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0918 19:17:56.531307  712152 command_runner.go:130] > # Defaults to false.
	I0918 19:17:56.531314  712152 command_runner.go:130] > # device_ownership_from_security_context = false
	I0918 19:17:56.531326  712152 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0918 19:17:56.531333  712152 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0918 19:17:56.531341  712152 command_runner.go:130] > # hooks_dir = [
	I0918 19:17:56.531347  712152 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0918 19:17:56.531356  712152 command_runner.go:130] > # ]
	I0918 19:17:56.531364  712152 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0918 19:17:56.531372  712152 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0918 19:17:56.531382  712152 command_runner.go:130] > # its default mounts from the following two files:
	I0918 19:17:56.531387  712152 command_runner.go:130] > #
	I0918 19:17:56.531394  712152 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0918 19:17:56.531407  712152 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0918 19:17:56.531414  712152 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0918 19:17:56.531422  712152 command_runner.go:130] > #
	I0918 19:17:56.531430  712152 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0918 19:17:56.531441  712152 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0918 19:17:56.531449  712152 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0918 19:17:56.531458  712152 command_runner.go:130] > #      only add mounts it finds in this file.
	I0918 19:17:56.531463  712152 command_runner.go:130] > #
	I0918 19:17:56.531469  712152 command_runner.go:130] > # default_mounts_file = ""
	I0918 19:17:56.531476  712152 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0918 19:17:56.531485  712152 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0918 19:17:56.531493  712152 command_runner.go:130] > # pids_limit = 0
	I0918 19:17:56.531500  712152 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0918 19:17:56.531512  712152 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0918 19:17:56.531519  712152 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0918 19:17:56.531532  712152 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0918 19:17:56.531538  712152 command_runner.go:130] > # log_size_max = -1
	I0918 19:17:56.531547  712152 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0918 19:17:56.531555  712152 command_runner.go:130] > # log_to_journald = false
	I0918 19:17:56.531566  712152 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0918 19:17:56.531573  712152 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0918 19:17:56.531583  712152 command_runner.go:130] > # Path to directory for container attach sockets.
	I0918 19:17:56.531589  712152 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0918 19:17:56.531599  712152 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0918 19:17:56.531894  712152 command_runner.go:130] > # bind_mount_prefix = ""
	I0918 19:17:56.531910  712152 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0918 19:17:56.531916  712152 command_runner.go:130] > # read_only = false
	I0918 19:17:56.531925  712152 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0918 19:17:56.531934  712152 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0918 19:17:56.531939  712152 command_runner.go:130] > # live configuration reload.
	I0918 19:17:56.531962  712152 command_runner.go:130] > # log_level = "info"
	I0918 19:17:56.531970  712152 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0918 19:17:56.531981  712152 command_runner.go:130] > # This option supports live configuration reload.
	I0918 19:17:56.531986  712152 command_runner.go:130] > # log_filter = ""
	I0918 19:17:56.531998  712152 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0918 19:17:56.532005  712152 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0918 19:17:56.532011  712152 command_runner.go:130] > # separated by comma.
	I0918 19:17:56.532017  712152 command_runner.go:130] > # uid_mappings = ""
	I0918 19:17:56.532024  712152 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0918 19:17:56.532036  712152 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0918 19:17:56.532041  712152 command_runner.go:130] > # separated by comma.
	I0918 19:17:56.532051  712152 command_runner.go:130] > # gid_mappings = ""
	I0918 19:17:56.532059  712152 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0918 19:17:56.532070  712152 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0918 19:17:56.532084  712152 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0918 19:17:56.532091  712152 command_runner.go:130] > # minimum_mappable_uid = -1
	I0918 19:17:56.532103  712152 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0918 19:17:56.532113  712152 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0918 19:17:56.532131  712152 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0918 19:17:56.532142  712152 command_runner.go:130] > # minimum_mappable_gid = -1
	I0918 19:17:56.532150  712152 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0918 19:17:56.532162  712152 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0918 19:17:56.532174  712152 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0918 19:17:56.532179  712152 command_runner.go:130] > # ctr_stop_timeout = 30
	I0918 19:17:56.532187  712152 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0918 19:17:56.532197  712152 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0918 19:17:56.532209  712152 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0918 19:17:56.532218  712152 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0918 19:17:56.532224  712152 command_runner.go:130] > # drop_infra_ctr = true
	I0918 19:17:56.532235  712152 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0918 19:17:56.532242  712152 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0918 19:17:56.532258  712152 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0918 19:17:56.532263  712152 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0918 19:17:56.532271  712152 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0918 19:17:56.532278  712152 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0918 19:17:56.532288  712152 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0918 19:17:56.532297  712152 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0918 19:17:56.532305  712152 command_runner.go:130] > # pinns_path = ""
	I0918 19:17:56.532313  712152 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0918 19:17:56.532325  712152 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0918 19:17:56.532333  712152 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0918 19:17:56.532339  712152 command_runner.go:130] > # default_runtime = "runc"
	I0918 19:17:56.532346  712152 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0918 19:17:56.532356  712152 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of creating it as a directory).
	I0918 19:17:56.532367  712152 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0918 19:17:56.532377  712152 command_runner.go:130] > # creation as a file is not desired either.
	I0918 19:17:56.532387  712152 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0918 19:17:56.532396  712152 command_runner.go:130] > # the hostname is being managed dynamically.
	I0918 19:17:56.532402  712152 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0918 19:17:56.532409  712152 command_runner.go:130] > # ]
	I0918 19:17:56.532417  712152 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0918 19:17:56.532425  712152 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0918 19:17:56.532433  712152 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0918 19:17:56.532441  712152 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0918 19:17:56.532450  712152 command_runner.go:130] > #
	I0918 19:17:56.532456  712152 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0918 19:17:56.532466  712152 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0918 19:17:56.532472  712152 command_runner.go:130] > #  runtime_type = "oci"
	I0918 19:17:56.532483  712152 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0918 19:17:56.532489  712152 command_runner.go:130] > #  privileged_without_host_devices = false
	I0918 19:17:56.532498  712152 command_runner.go:130] > #  allowed_annotations = []
	I0918 19:17:56.532502  712152 command_runner.go:130] > # Where:
	I0918 19:17:56.532509  712152 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0918 19:17:56.532520  712152 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0918 19:17:56.532531  712152 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0918 19:17:56.532540  712152 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0918 19:17:56.532547  712152 command_runner.go:130] > #   in $PATH.
	I0918 19:17:56.532619  712152 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0918 19:17:56.532634  712152 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0918 19:17:56.532643  712152 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0918 19:17:56.532648  712152 command_runner.go:130] > #   state.
	I0918 19:17:56.532661  712152 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0918 19:17:56.532668  712152 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0918 19:17:56.532676  712152 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0918 19:17:56.532699  712152 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0918 19:17:56.532713  712152 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0918 19:17:56.532721  712152 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0918 19:17:56.532727  712152 command_runner.go:130] > #   The currently recognized values are:
	I0918 19:17:56.532735  712152 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0918 19:17:56.532747  712152 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0918 19:17:56.532755  712152 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0918 19:17:56.532766  712152 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0918 19:17:56.532775  712152 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0918 19:17:56.532787  712152 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0918 19:17:56.532795  712152 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0918 19:17:56.532803  712152 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0918 19:17:56.532810  712152 command_runner.go:130] > #   should be moved to the container's cgroup
	I0918 19:17:56.532815  712152 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0918 19:17:56.533206  712152 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0918 19:17:56.533222  712152 command_runner.go:130] > runtime_type = "oci"
	I0918 19:17:56.533231  712152 command_runner.go:130] > runtime_root = "/run/runc"
	I0918 19:17:56.533236  712152 command_runner.go:130] > runtime_config_path = ""
	I0918 19:17:56.533241  712152 command_runner.go:130] > monitor_path = ""
	I0918 19:17:56.533248  712152 command_runner.go:130] > monitor_cgroup = ""
	I0918 19:17:56.533254  712152 command_runner.go:130] > monitor_exec_cgroup = ""
	I0918 19:17:56.533283  712152 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0918 19:17:56.533291  712152 command_runner.go:130] > # running containers
	I0918 19:17:56.533296  712152 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0918 19:17:56.533304  712152 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0918 19:17:56.533317  712152 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0918 19:17:56.533326  712152 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0918 19:17:56.533333  712152 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0918 19:17:56.533339  712152 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0918 19:17:56.533347  712152 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0918 19:17:56.533352  712152 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0918 19:17:56.533366  712152 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0918 19:17:56.533372  712152 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0918 19:17:56.533380  712152 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0918 19:17:56.533387  712152 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0918 19:17:56.533395  712152 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0918 19:17:56.533407  712152 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0918 19:17:56.533416  712152 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0918 19:17:56.533426  712152 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0918 19:17:56.533438  712152 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0918 19:17:56.533450  712152 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0918 19:17:56.533457  712152 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0918 19:17:56.533466  712152 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0918 19:17:56.533471  712152 command_runner.go:130] > # Example:
	I0918 19:17:56.533479  712152 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0918 19:17:56.533485  712152 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0918 19:17:56.533494  712152 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0918 19:17:56.533500  712152 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0918 19:17:56.533505  712152 command_runner.go:130] > # cpuset = "0-1"
	I0918 19:17:56.533512  712152 command_runner.go:130] > # cpushares = 0
	I0918 19:17:56.533516  712152 command_runner.go:130] > # Where:
	I0918 19:17:56.533522  712152 command_runner.go:130] > # The workload name is workload-type.
	I0918 19:17:56.533535  712152 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0918 19:17:56.533541  712152 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0918 19:17:56.533549  712152 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0918 19:17:56.533559  712152 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0918 19:17:56.533568  712152 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0918 19:17:56.533573  712152 command_runner.go:130] > # 
	I0918 19:17:56.533581  712152 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0918 19:17:56.533587  712152 command_runner.go:130] > #
	I0918 19:17:56.533599  712152 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0918 19:17:56.533610  712152 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0918 19:17:56.533683  712152 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0918 19:17:56.533699  712152 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0918 19:17:56.533707  712152 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0918 19:17:56.533712  712152 command_runner.go:130] > [crio.image]
	I0918 19:17:56.533720  712152 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0918 19:17:56.533725  712152 command_runner.go:130] > # default_transport = "docker://"
	I0918 19:17:56.533736  712152 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0918 19:17:56.533758  712152 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0918 19:17:56.533770  712152 command_runner.go:130] > # global_auth_file = ""
	I0918 19:17:56.533778  712152 command_runner.go:130] > # The image used to instantiate infra containers.
	I0918 19:17:56.533784  712152 command_runner.go:130] > # This option supports live configuration reload.
	I0918 19:17:56.533790  712152 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0918 19:17:56.533799  712152 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0918 19:17:56.533808  712152 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0918 19:17:56.533816  712152 command_runner.go:130] > # This option supports live configuration reload.
	I0918 19:17:56.533823  712152 command_runner.go:130] > # pause_image_auth_file = ""
	I0918 19:17:56.533831  712152 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0918 19:17:56.533840  712152 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0918 19:17:56.533848  712152 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0918 19:17:56.533857  712152 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0918 19:17:56.533862  712152 command_runner.go:130] > # pause_command = "/pause"
	I0918 19:17:56.533870  712152 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0918 19:17:56.533878  712152 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0918 19:17:56.533888  712152 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0918 19:17:56.533896  712152 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0918 19:17:56.533903  712152 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0918 19:17:56.533912  712152 command_runner.go:130] > # signature_policy = ""
	I0918 19:17:56.533919  712152 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0918 19:17:56.533930  712152 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0918 19:17:56.533936  712152 command_runner.go:130] > # changing them here.
	I0918 19:17:56.533941  712152 command_runner.go:130] > # insecure_registries = [
	I0918 19:17:56.533945  712152 command_runner.go:130] > # ]
	I0918 19:17:56.533953  712152 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0918 19:17:56.533959  712152 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0918 19:17:56.533970  712152 command_runner.go:130] > # image_volumes = "mkdir"
	I0918 19:17:56.533977  712152 command_runner.go:130] > # Temporary directory to use for storing big files
	I0918 19:17:56.533986  712152 command_runner.go:130] > # big_files_temporary_dir = ""
	I0918 19:17:56.533994  712152 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0918 19:17:56.534001  712152 command_runner.go:130] > # CNI plugins.
	I0918 19:17:56.534006  712152 command_runner.go:130] > [crio.network]
	I0918 19:17:56.534013  712152 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0918 19:17:56.534020  712152 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0918 19:17:56.534026  712152 command_runner.go:130] > # cni_default_network = ""
	I0918 19:17:56.534035  712152 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0918 19:17:56.534041  712152 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0918 19:17:56.534050  712152 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0918 19:17:56.534055  712152 command_runner.go:130] > # plugin_dirs = [
	I0918 19:17:56.534271  712152 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0918 19:17:56.534284  712152 command_runner.go:130] > # ]
	I0918 19:17:56.534292  712152 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0918 19:17:56.534296  712152 command_runner.go:130] > [crio.metrics]
	I0918 19:17:56.534303  712152 command_runner.go:130] > # Globally enable or disable metrics support.
	I0918 19:17:56.534310  712152 command_runner.go:130] > # enable_metrics = false
	I0918 19:17:56.534316  712152 command_runner.go:130] > # Specify enabled metrics collectors.
	I0918 19:17:56.534322  712152 command_runner.go:130] > # Per default all metrics are enabled.
	I0918 19:17:56.534333  712152 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0918 19:17:56.534341  712152 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0918 19:17:56.534352  712152 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0918 19:17:56.534357  712152 command_runner.go:130] > # metrics_collectors = [
	I0918 19:17:56.534362  712152 command_runner.go:130] > # 	"operations",
	I0918 19:17:56.534368  712152 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0918 19:17:56.534376  712152 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0918 19:17:56.534383  712152 command_runner.go:130] > # 	"operations_errors",
	I0918 19:17:56.534389  712152 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0918 19:17:56.534397  712152 command_runner.go:130] > # 	"image_pulls_by_name",
	I0918 19:17:56.534402  712152 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0918 19:17:56.534408  712152 command_runner.go:130] > # 	"image_pulls_failures",
	I0918 19:17:56.534415  712152 command_runner.go:130] > # 	"image_pulls_successes",
	I0918 19:17:56.534420  712152 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0918 19:17:56.534425  712152 command_runner.go:130] > # 	"image_layer_reuse",
	I0918 19:17:56.534433  712152 command_runner.go:130] > # 	"containers_oom_total",
	I0918 19:17:56.534440  712152 command_runner.go:130] > # 	"containers_oom",
	I0918 19:17:56.534445  712152 command_runner.go:130] > # 	"processes_defunct",
	I0918 19:17:56.534450  712152 command_runner.go:130] > # 	"operations_total",
	I0918 19:17:56.534456  712152 command_runner.go:130] > # 	"operations_latency_seconds",
	I0918 19:17:56.534462  712152 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0918 19:17:56.534470  712152 command_runner.go:130] > # 	"operations_errors_total",
	I0918 19:17:56.534475  712152 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0918 19:17:56.534481  712152 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0918 19:17:56.534495  712152 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0918 19:17:56.534504  712152 command_runner.go:130] > # 	"image_pulls_success_total",
	I0918 19:17:56.534510  712152 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0918 19:17:56.534516  712152 command_runner.go:130] > # 	"containers_oom_count_total",
	I0918 19:17:56.534520  712152 command_runner.go:130] > # ]
	I0918 19:17:56.534526  712152 command_runner.go:130] > # The port on which the metrics server will listen.
	I0918 19:17:56.534531  712152 command_runner.go:130] > # metrics_port = 9090
	I0918 19:17:56.534545  712152 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0918 19:17:56.534550  712152 command_runner.go:130] > # metrics_socket = ""
	I0918 19:17:56.534557  712152 command_runner.go:130] > # The certificate for the secure metrics server.
	I0918 19:17:56.534567  712152 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0918 19:17:56.534574  712152 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0918 19:17:56.534582  712152 command_runner.go:130] > # certificate on any modification event.
	I0918 19:17:56.534588  712152 command_runner.go:130] > # metrics_cert = ""
	I0918 19:17:56.534595  712152 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0918 19:17:56.534601  712152 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0918 19:17:56.534606  712152 command_runner.go:130] > # metrics_key = ""
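	If enable_metrics were flipped on, the endpoint serves plain Prometheus text. A minimal Go sketch of scraping it on the documented default port (hypothetical usage; the test run never enables metrics):
	
	package main
	
	import (
		"io"
		"net/http"
		"os"
	)
	
	func main() {
		// metrics_port = 9090 is the default noted in the config above.
		resp, err := http.Get("http://127.0.0.1:9090/metrics")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		io.Copy(os.Stdout, resp.Body) // emits counters such as crio_operations
	}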
	I0918 19:17:56.534616  712152 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0918 19:17:56.534624  712152 command_runner.go:130] > [crio.tracing]
	I0918 19:17:56.534631  712152 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0918 19:17:56.534638  712152 command_runner.go:130] > # enable_tracing = false
	I0918 19:17:56.534645  712152 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0918 19:17:56.534653  712152 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0918 19:17:56.534660  712152 command_runner.go:130] > # Number of samples to collect per million spans.
	I0918 19:17:56.534668  712152 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0918 19:17:56.534675  712152 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0918 19:17:56.534679  712152 command_runner.go:130] > [crio.stats]
	I0918 19:17:56.534687  712152 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0918 19:17:56.534693  712152 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0918 19:17:56.534707  712152 command_runner.go:130] > # stats_collection_period = 0
	I0918 19:17:56.536518  712152 command_runner.go:130] ! time="2023-09-18 19:17:56.524344896Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I0918 19:17:56.536543  712152 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0918 19:17:56.536645  712152 cni.go:84] Creating CNI manager for ""
	I0918 19:17:56.536657  712152 cni.go:136] 1 nodes found, recommending kindnet
	I0918 19:17:56.536687  712152 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0918 19:17:56.536710  712152 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-689235 NodeName:multinode-689235 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0918 19:17:56.536852  712152 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-689235"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0918 19:17:56.536940  712152 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-689235 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:multinode-689235 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0918 19:17:56.537010  712152 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I0918 19:17:56.547914  712152 command_runner.go:130] > kubeadm
	I0918 19:17:56.547932  712152 command_runner.go:130] > kubectl
	I0918 19:17:56.547937  712152 command_runner.go:130] > kubelet
	I0918 19:17:56.547950  712152 binaries.go:44] Found k8s binaries, skipping transfer
	I0918 19:17:56.548024  712152 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0918 19:17:56.558441  712152 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (426 bytes)
	I0918 19:17:56.578929  712152 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0918 19:17:56.599655  712152 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
	I0918 19:17:56.620000  712152 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0918 19:17:56.624338  712152 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
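	The one-liner above is an idempotent rewrite: strip any stale control-plane.minikube.internal entry, then append the current mapping. A Go sketch of the same idea (illustrative only; minikube performs this over SSH with the bash shown):
	
	package main
	
	import (
		"os"
		"strings"
	)
	
	func main() {
		const hostsPath = "/etc/hosts" // writing here requires root
		data, err := os.ReadFile(hostsPath)
		if err != nil {
			panic(err)
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			// Drop any existing mapping for the control-plane name.
			if !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
				kept = append(kept, line)
			}
		}
		kept = append(kept, "192.168.58.2\tcontrol-plane.minikube.internal")
		if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			panic(err)
		}
	}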
	I0918 19:17:56.637341  712152 certs.go:56] Setting up /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/multinode-689235 for IP: 192.168.58.2
	I0918 19:17:56.637369  712152 certs.go:190] acquiring lock for shared ca certs: {Name:mkb16b377708c2d983623434e9d896d9d8fd7133 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:17:56.637534  712152 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17263-642665/.minikube/ca.key
	I0918 19:17:56.637584  712152 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17263-642665/.minikube/proxy-client-ca.key
	I0918 19:17:56.637637  712152 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/multinode-689235/client.key
	I0918 19:17:56.637652  712152 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/multinode-689235/client.crt with IP's: []
	I0918 19:17:57.316831  712152 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/multinode-689235/client.crt ...
	I0918 19:17:57.316862  712152 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/multinode-689235/client.crt: {Name:mk83f4b6b934c909fdf8134bdfcea3ce4f5e1b60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:17:57.317060  712152 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/multinode-689235/client.key ...
	I0918 19:17:57.317073  712152 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/multinode-689235/client.key: {Name:mk2d47900b1ebc20e3ec6f63c625b7c3025621ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:17:57.317167  712152 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/multinode-689235/apiserver.key.cee25041
	I0918 19:17:57.317182  712152 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/multinode-689235/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0918 19:17:57.770090  712152 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/multinode-689235/apiserver.crt.cee25041 ...
	I0918 19:17:57.770126  712152 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/multinode-689235/apiserver.crt.cee25041: {Name:mkedceae778294601d74f6b6e0c1c6a72811bd98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:17:57.770359  712152 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/multinode-689235/apiserver.key.cee25041 ...
	I0918 19:17:57.770375  712152 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/multinode-689235/apiserver.key.cee25041: {Name:mkbd3e46d9211d5927075f4fecb99be1254e853c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:17:57.770462  712152 certs.go:337] copying /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/multinode-689235/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/multinode-689235/apiserver.crt
	I0918 19:17:57.770546  712152 certs.go:341] copying /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/multinode-689235/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/multinode-689235/apiserver.key
	I0918 19:17:57.770610  712152 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/multinode-689235/proxy-client.key
	I0918 19:17:57.770626  712152 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/multinode-689235/proxy-client.crt with IP's: []
	I0918 19:17:58.493949  712152 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/multinode-689235/proxy-client.crt ...
	I0918 19:17:58.493979  712152 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/multinode-689235/proxy-client.crt: {Name:mkb8ca6d3b8a0f2a16be1604880af17442b0852c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:17:58.494172  712152 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/multinode-689235/proxy-client.key ...
	I0918 19:17:58.494185  712152 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/multinode-689235/proxy-client.key: {Name:mk2fe625749dbfa070837825244e837cb40133fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
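	The crypto.go steps above boil down to issuing certificates whose SANs cover the node, service, and loopback IPs. A self-contained Go sketch with crypto/x509 (a simplified self-signed stand-in, not minikube's actual CA-signed flow):
	
	package main
	
	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)
	
	func main() {
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// The SAN list logged above for apiserver.crt.
			IPAddresses: []net.IP{
				net.ParseIP("192.168.58.2"),
				net.ParseIP("10.96.0.1"),
				net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"),
			},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}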
	I0918 19:17:58.494266  712152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/multinode-689235/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0918 19:17:58.494290  712152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/multinode-689235/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0918 19:17:58.494302  712152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/multinode-689235/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0918 19:17:58.494313  712152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/multinode-689235/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0918 19:17:58.494324  712152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17263-642665/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0918 19:17:58.494336  712152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17263-642665/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0918 19:17:58.494348  712152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17263-642665/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0918 19:17:58.494364  712152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17263-642665/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0918 19:17:58.494423  712152 certs.go:437] found cert: /home/jenkins/minikube-integration/17263-642665/.minikube/certs/home/jenkins/minikube-integration/17263-642665/.minikube/certs/648003.pem (1338 bytes)
	W0918 19:17:58.494461  712152 certs.go:433] ignoring /home/jenkins/minikube-integration/17263-642665/.minikube/certs/home/jenkins/minikube-integration/17263-642665/.minikube/certs/648003_empty.pem, impossibly tiny 0 bytes
	I0918 19:17:58.494478  712152 certs.go:437] found cert: /home/jenkins/minikube-integration/17263-642665/.minikube/certs/home/jenkins/minikube-integration/17263-642665/.minikube/certs/ca-key.pem (1675 bytes)
	I0918 19:17:58.494503  712152 certs.go:437] found cert: /home/jenkins/minikube-integration/17263-642665/.minikube/certs/home/jenkins/minikube-integration/17263-642665/.minikube/certs/ca.pem (1082 bytes)
	I0918 19:17:58.494528  712152 certs.go:437] found cert: /home/jenkins/minikube-integration/17263-642665/.minikube/certs/home/jenkins/minikube-integration/17263-642665/.minikube/certs/cert.pem (1123 bytes)
	I0918 19:17:58.494561  712152 certs.go:437] found cert: /home/jenkins/minikube-integration/17263-642665/.minikube/certs/home/jenkins/minikube-integration/17263-642665/.minikube/certs/key.pem (1675 bytes)
	I0918 19:17:58.494610  712152 certs.go:437] found cert: /home/jenkins/minikube-integration/17263-642665/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17263-642665/.minikube/files/etc/ssl/certs/6480032.pem (1708 bytes)
	I0918 19:17:58.494641  712152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17263-642665/.minikube/files/etc/ssl/certs/6480032.pem -> /usr/share/ca-certificates/6480032.pem
	I0918 19:17:58.494656  712152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17263-642665/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0918 19:17:58.494669  712152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17263-642665/.minikube/certs/648003.pem -> /usr/share/ca-certificates/648003.pem
	I0918 19:17:58.495358  712152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/multinode-689235/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0918 19:17:58.523912  712152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/multinode-689235/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0918 19:17:58.552337  712152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/multinode-689235/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0918 19:17:58.581386  712152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/multinode-689235/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0918 19:17:58.609618  712152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17263-642665/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0918 19:17:58.638357  712152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17263-642665/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0918 19:17:58.670774  712152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17263-642665/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0918 19:17:58.699524  712152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17263-642665/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0918 19:17:58.728326  712152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17263-642665/.minikube/files/etc/ssl/certs/6480032.pem --> /usr/share/ca-certificates/6480032.pem (1708 bytes)
	I0918 19:17:58.757427  712152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17263-642665/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0918 19:17:58.785713  712152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17263-642665/.minikube/certs/648003.pem --> /usr/share/ca-certificates/648003.pem (1338 bytes)
	I0918 19:17:58.814250  712152 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
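The "scp memory -->" lines differ from the file copies above them: the kubeconfig is rendered in memory and streamed straight to the node rather than read from disk. A hedged sketch of such a transfer over SSH (copyMemoryAsset is an illustrative helper built on golang.org/x/crypto/ssh, not minikube's internal API):

    package assets

    import (
        "bytes"
        "fmt"

        "golang.org/x/crypto/ssh"
    )

    // copyMemoryAsset streams an in-memory byte slice to a remote path by
    // piping it into "sudo tee" over an SSH session, which is roughly what
    // an "scp memory -->" transfer amounts to.
    func copyMemoryAsset(client *ssh.Client, data []byte, remotePath string) error {
        session, err := client.NewSession()
        if err != nil {
            return err
        }
        defer session.Close()
        session.Stdin = bytes.NewReader(data)
        return session.Run(fmt.Sprintf("sudo tee %s >/dev/null", remotePath))
    }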
	I0918 19:17:58.835442  712152 ssh_runner.go:195] Run: openssl version
	I0918 19:17:58.842669  712152 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0918 19:17:58.842785  712152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6480032.pem && ln -fs /usr/share/ca-certificates/6480032.pem /etc/ssl/certs/6480032.pem"
	I0918 19:17:58.855527  712152 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6480032.pem
	I0918 19:17:58.860199  712152 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 18 19:02 /usr/share/ca-certificates/6480032.pem
	I0918 19:17:58.860420  712152 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 18 19:02 /usr/share/ca-certificates/6480032.pem
	I0918 19:17:58.860479  712152 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6480032.pem
	I0918 19:17:58.868851  712152 command_runner.go:130] > 3ec20f2e
	I0918 19:17:58.869225  712152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6480032.pem /etc/ssl/certs/3ec20f2e.0"
	I0918 19:17:58.880951  712152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0918 19:17:58.892746  712152 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0918 19:17:58.897491  712152 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 18 18:55 /usr/share/ca-certificates/minikubeCA.pem
	I0918 19:17:58.897655  712152 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 18 18:55 /usr/share/ca-certificates/minikubeCA.pem
	I0918 19:17:58.897734  712152 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0918 19:17:58.905941  712152 command_runner.go:130] > b5213941
	I0918 19:17:58.906344  712152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0918 19:17:58.917733  712152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/648003.pem && ln -fs /usr/share/ca-certificates/648003.pem /etc/ssl/certs/648003.pem"
	I0918 19:17:58.929431  712152 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/648003.pem
	I0918 19:17:58.933776  712152 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 18 19:02 /usr/share/ca-certificates/648003.pem
	I0918 19:17:58.934004  712152 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 18 19:02 /usr/share/ca-certificates/648003.pem
	I0918 19:17:58.934064  712152 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/648003.pem
	I0918 19:17:58.942140  712152 command_runner.go:130] > 51391683
	I0918 19:17:58.942560  712152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/648003.pem /etc/ssl/certs/51391683.0"
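The three hash-and-symlink rounds above follow OpenSSL's c_rehash convention: a CA certificate dropped into /usr/share/ca-certificates is only found by the library if a symlink named <subject-hash>.0 in /etc/ssl/certs points at it, which is why each `openssl x509 -hash` call is paired with an `ln -fs`. A minimal Go sketch of the same two steps (assumes openssl on PATH and root privileges; installCACert is an illustrative helper, not minikube's own function):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // installCACert mirrors the two steps in the log above: ask openssl for
    // the certificate's subject-name hash, then point /etc/ssl/certs/<hash>.0
    // at the certificate so OpenSSL's directory lookup can find it.
    func installCACert(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", certPath, err)
        }
        link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
        _ = os.Remove(link) // emulate ln -fs: replace any existing link
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }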
	I0918 19:17:58.955107  712152 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0918 19:17:58.959681  712152 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0918 19:17:58.959711  712152 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0918 19:17:58.959747  712152 kubeadm.go:404] StartCluster: {Name:multinode-689235 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-689235 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0918 19:17:58.959915  712152 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0918 19:17:58.959970  712152 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0918 19:17:59.005240  712152 cri.go:89] found id: ""
	I0918 19:17:59.005323  712152 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0918 19:17:59.018589  712152 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0918 19:17:59.018656  712152 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0918 19:17:59.018669  712152 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0918 19:17:59.018750  712152 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0918 19:17:59.029650  712152 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0918 19:17:59.029747  712152 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0918 19:17:59.040548  712152 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0918 19:17:59.040581  712152 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0918 19:17:59.040591  712152 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0918 19:17:59.040616  712152 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 19:17:59.040651  712152 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0918 19:17:59.040699  712152 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0918 19:17:59.095376  712152 kubeadm.go:322] [init] Using Kubernetes version: v1.28.2
	I0918 19:17:59.095402  712152 command_runner.go:130] > [init] Using Kubernetes version: v1.28.2
	I0918 19:17:59.095725  712152 kubeadm.go:322] [preflight] Running pre-flight checks
	I0918 19:17:59.095744  712152 command_runner.go:130] > [preflight] Running pre-flight checks
	I0918 19:17:59.140719  712152 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0918 19:17:59.140745  712152 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0918 19:17:59.140797  712152 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1044-aws
	I0918 19:17:59.140805  712152 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1044-aws
	I0918 19:17:59.140837  712152 kubeadm.go:322] OS: Linux
	I0918 19:17:59.140847  712152 command_runner.go:130] > OS: Linux
	I0918 19:17:59.140894  712152 kubeadm.go:322] CGROUPS_CPU: enabled
	I0918 19:17:59.140903  712152 command_runner.go:130] > CGROUPS_CPU: enabled
	I0918 19:17:59.140947  712152 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0918 19:17:59.140955  712152 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0918 19:17:59.140999  712152 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0918 19:17:59.141007  712152 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0918 19:17:59.141051  712152 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0918 19:17:59.141060  712152 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0918 19:17:59.141104  712152 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0918 19:17:59.141111  712152 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0918 19:17:59.141155  712152 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0918 19:17:59.141163  712152 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0918 19:17:59.141205  712152 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0918 19:17:59.141213  712152 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0918 19:17:59.141257  712152 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0918 19:17:59.141264  712152 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0918 19:17:59.141307  712152 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0918 19:17:59.141315  712152 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0918 19:17:59.220916  712152 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0918 19:17:59.220945  712152 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0918 19:17:59.221065  712152 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0918 19:17:59.221082  712152 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0918 19:17:59.221169  712152 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0918 19:17:59.221174  712152 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0918 19:17:59.480171  712152 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0918 19:17:59.483973  712152 out.go:204]   - Generating certificates and keys ...
	I0918 19:17:59.480368  712152 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0918 19:17:59.484064  712152 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0918 19:17:59.484092  712152 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0918 19:17:59.484203  712152 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0918 19:17:59.484235  712152 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0918 19:17:59.825815  712152 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0918 19:17:59.825839  712152 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0918 19:18:00.083202  712152 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0918 19:18:00.083230  712152 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0918 19:18:00.907111  712152 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0918 19:18:00.907134  712152 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0918 19:18:01.601549  712152 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0918 19:18:01.601577  712152 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0918 19:18:01.800596  712152 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0918 19:18:01.800623  712152 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0918 19:18:01.800960  712152 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-689235] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0918 19:18:01.800977  712152 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-689235] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0918 19:18:02.523288  712152 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0918 19:18:02.523315  712152 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0918 19:18:02.523955  712152 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-689235] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0918 19:18:02.523972  712152 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-689235] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0918 19:18:03.078141  712152 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0918 19:18:03.078168  712152 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0918 19:18:03.232986  712152 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0918 19:18:03.233012  712152 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0918 19:18:03.846649  712152 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0918 19:18:03.846674  712152 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0918 19:18:03.847101  712152 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0918 19:18:03.847119  712152 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0918 19:18:04.339520  712152 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0918 19:18:04.339551  712152 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0918 19:18:05.304202  712152 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0918 19:18:05.304225  712152 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0918 19:18:07.007515  712152 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0918 19:18:07.007541  712152 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0918 19:18:07.364005  712152 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0918 19:18:07.364033  712152 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0918 19:18:07.364882  712152 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0918 19:18:07.364899  712152 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0918 19:18:07.368270  712152 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0918 19:18:07.370562  712152 out.go:204]   - Booting up control plane ...
	I0918 19:18:07.368363  712152 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0918 19:18:07.370665  712152 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0918 19:18:07.370680  712152 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0918 19:18:07.371859  712152 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0918 19:18:07.371881  712152 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0918 19:18:07.373138  712152 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0918 19:18:07.373156  712152 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0918 19:18:07.385291  712152 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0918 19:18:07.385321  712152 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0918 19:18:07.385401  712152 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0918 19:18:07.385410  712152 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0918 19:18:07.385447  712152 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0918 19:18:07.385457  712152 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0918 19:18:07.496980  712152 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0918 19:18:07.497010  712152 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0918 19:18:15.500345  712152 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.003399 seconds
	I0918 19:18:15.500368  712152 command_runner.go:130] > [apiclient] All control plane components are healthy after 8.003399 seconds
	I0918 19:18:15.500551  712152 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0918 19:18:15.500560  712152 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0918 19:18:15.515077  712152 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0918 19:18:15.515105  712152 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0918 19:18:16.045418  712152 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0918 19:18:16.045449  712152 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0918 19:18:16.045652  712152 kubeadm.go:322] [mark-control-plane] Marking the node multinode-689235 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0918 19:18:16.045671  712152 command_runner.go:130] > [mark-control-plane] Marking the node multinode-689235 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0918 19:18:16.558236  712152 kubeadm.go:322] [bootstrap-token] Using token: e09d46.zbmc0xi9vfda5cit
	I0918 19:18:16.560062  712152 out.go:204]   - Configuring RBAC rules ...
	I0918 19:18:16.558403  712152 command_runner.go:130] > [bootstrap-token] Using token: e09d46.zbmc0xi9vfda5cit
	I0918 19:18:16.560197  712152 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0918 19:18:16.560206  712152 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0918 19:18:16.565725  712152 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0918 19:18:16.565746  712152 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0918 19:18:16.575035  712152 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0918 19:18:16.575063  712152 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0918 19:18:16.579272  712152 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0918 19:18:16.579298  712152 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0918 19:18:16.583379  712152 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0918 19:18:16.583404  712152 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0918 19:18:16.591338  712152 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0918 19:18:16.591368  712152 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0918 19:18:16.607548  712152 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0918 19:18:16.607594  712152 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0918 19:18:16.832156  712152 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0918 19:18:16.832183  712152 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0918 19:18:16.990487  712152 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0918 19:18:16.990513  712152 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0918 19:18:16.990521  712152 kubeadm.go:322] 
	I0918 19:18:16.990577  712152 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0918 19:18:16.990586  712152 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0918 19:18:16.990590  712152 kubeadm.go:322] 
	I0918 19:18:16.990662  712152 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0918 19:18:16.990670  712152 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0918 19:18:16.990675  712152 kubeadm.go:322] 
	I0918 19:18:16.990707  712152 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0918 19:18:16.990715  712152 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0918 19:18:16.990769  712152 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0918 19:18:16.990778  712152 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0918 19:18:16.990825  712152 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0918 19:18:16.990833  712152 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0918 19:18:16.990838  712152 kubeadm.go:322] 
	I0918 19:18:16.990888  712152 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0918 19:18:16.990896  712152 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0918 19:18:16.990900  712152 kubeadm.go:322] 
	I0918 19:18:16.990945  712152 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0918 19:18:16.990952  712152 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0918 19:18:16.990956  712152 kubeadm.go:322] 
	I0918 19:18:16.991005  712152 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0918 19:18:16.991013  712152 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0918 19:18:16.991088  712152 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0918 19:18:16.991096  712152 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0918 19:18:16.991160  712152 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0918 19:18:16.991168  712152 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0918 19:18:16.991172  712152 kubeadm.go:322] 
	I0918 19:18:16.991250  712152 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0918 19:18:16.991258  712152 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0918 19:18:16.991329  712152 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0918 19:18:16.991334  712152 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0918 19:18:16.991338  712152 kubeadm.go:322] 
	I0918 19:18:16.991416  712152 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token e09d46.zbmc0xi9vfda5cit \
	I0918 19:18:16.991421  712152 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token e09d46.zbmc0xi9vfda5cit \
	I0918 19:18:16.991516  712152 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:1471e1bb7c66f1f1f8363746a1e5f2ae35a8554d6ad2342a0b3973b70608e7c8 \
	I0918 19:18:16.991521  712152 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:1471e1bb7c66f1f1f8363746a1e5f2ae35a8554d6ad2342a0b3973b70608e7c8 \
	I0918 19:18:16.991540  712152 kubeadm.go:322] 	--control-plane 
	I0918 19:18:16.991545  712152 command_runner.go:130] > 	--control-plane 
	I0918 19:18:16.991549  712152 kubeadm.go:322] 
	I0918 19:18:16.991628  712152 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0918 19:18:16.991633  712152 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0918 19:18:16.991637  712152 kubeadm.go:322] 
	I0918 19:18:16.991713  712152 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token e09d46.zbmc0xi9vfda5cit \
	I0918 19:18:16.991717  712152 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token e09d46.zbmc0xi9vfda5cit \
	I0918 19:18:16.991824  712152 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:1471e1bb7c66f1f1f8363746a1e5f2ae35a8554d6ad2342a0b3973b70608e7c8 
	I0918 19:18:16.991829  712152 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:1471e1bb7c66f1f1f8363746a1e5f2ae35a8554d6ad2342a0b3973b70608e7c8 
	I0918 19:18:16.995418  712152 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1044-aws\n", err: exit status 1
	I0918 19:18:16.995485  712152 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1044-aws\n", err: exit status 1
	I0918 19:18:16.995642  712152 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0918 19:18:16.995668  712152 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
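The --discovery-token-ca-cert-hash value repeated in both join commands above is not arbitrary: it is the SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info, which joining nodes use to pin the CA they discover over the bootstrap token. A sketch of how the same value can be recomputed from ca.crt, using only the Go standard library:

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    // caCertHash recomputes the value kubeadm prints as
    // --discovery-token-ca-cert-hash: a SHA-256 digest over the DER-encoded
    // Subject Public Key Info of the cluster CA certificate.
    func caCertHash(caPath string) (string, error) {
        pemBytes, err := os.ReadFile(caPath)
        if err != nil {
            return "", err
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            return "", fmt.Errorf("no PEM block in %s", caPath)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return "", err
        }
        return fmt.Sprintf("sha256:%x", sha256.Sum256(cert.RawSubjectPublicKeyInfo)), nil
    }

    func main() {
        hash, err := caCertHash("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Println(hash)
    }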
	I0918 19:18:16.995708  712152 cni.go:84] Creating CNI manager for ""
	I0918 19:18:16.995726  712152 cni.go:136] 1 nodes found, recommending kindnet
	I0918 19:18:16.998557  712152 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0918 19:18:17.000910  712152 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0918 19:18:17.014931  712152 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0918 19:18:17.014952  712152 command_runner.go:130] >   Size: 3841245   	Blocks: 7504       IO Block: 4096   regular file
	I0918 19:18:17.014960  712152 command_runner.go:130] > Device: 36h/54d	Inode: 1308310     Links: 1
	I0918 19:18:17.014970  712152 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0918 19:18:17.014977  712152 command_runner.go:130] > Access: 2023-05-09 19:54:42.000000000 +0000
	I0918 19:18:17.014983  712152 command_runner.go:130] > Modify: 2023-05-09 19:54:42.000000000 +0000
	I0918 19:18:17.014990  712152 command_runner.go:130] > Change: 2023-09-18 18:55:16.104663439 +0000
	I0918 19:18:17.014997  712152 command_runner.go:130] >  Birth: 2023-09-18 18:55:16.060663193 +0000
	I0918 19:18:17.015707  712152 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.2/kubectl ...
	I0918 19:18:17.015722  712152 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0918 19:18:17.066010  712152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0918 19:18:17.890898  712152 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0918 19:18:17.899184  712152 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0918 19:18:17.909749  712152 command_runner.go:130] > serviceaccount/kindnet created
	I0918 19:18:17.921780  712152 command_runner.go:130] > daemonset.apps/kindnet created
	I0918 19:18:17.927296  712152 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0918 19:18:17.927454  712152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:18:17.927467  712152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36 minikube.k8s.io/name=multinode-689235 minikube.k8s.io/updated_at=2023_09_18T19_18_17_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:18:18.174041  712152 command_runner.go:130] > node/multinode-689235 labeled
	I0918 19:18:18.178281  712152 command_runner.go:130] > -16
	I0918 19:18:18.178313  712152 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0918 19:18:18.178336  712152 ops.go:34] apiserver oom_adj: -16
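The "-16" read back here comes from procfs: the kube-apiserver's OOM-killer adjustment is inspected to confirm the control plane is shielded from memory-pressure kills. A sketch of the same check (assumes a single kube-apiserver process and pgrep on PATH; readAPIServerOOMAdj is an illustrative name, not minikube's):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // readAPIServerOOMAdj reproduces the check behind "apiserver oom_adj: -16"
    // above: resolve the kube-apiserver PID with pgrep, then read its oom_adj
    // value from procfs.
    func readAPIServerOOMAdj() (string, error) {
        pid, err := exec.Command("pgrep", "kube-apiserver").Output()
        if err != nil {
            return "", err
        }
        raw, err := os.ReadFile("/proc/" + strings.TrimSpace(string(pid)) + "/oom_adj")
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(raw)), nil
    }

    func main() {
        adj, err := readAPIServerOOMAdj()
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Println("apiserver oom_adj:", adj)
    }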
	I0918 19:18:18.178404  712152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:18:18.297763  712152 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0918 19:18:18.297855  712152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:18:18.394619  712152 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0918 19:18:18.895415  712152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:18:18.986768  712152 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0918 19:18:19.395564  712152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:18:19.485423  712152 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0918 19:18:19.895059  712152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:18:19.983881  712152 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0918 19:18:20.395552  712152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:18:20.492042  712152 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0918 19:18:20.895718  712152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:18:20.991512  712152 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0918 19:18:21.394903  712152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:18:21.486740  712152 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0918 19:18:21.895156  712152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:18:21.987757  712152 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0918 19:18:22.395162  712152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:18:22.488419  712152 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0918 19:18:22.894896  712152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:18:22.987942  712152 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0918 19:18:23.395468  712152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:18:23.483057  712152 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0918 19:18:23.895871  712152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:18:23.989476  712152 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0918 19:18:24.394824  712152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:18:24.484227  712152 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0918 19:18:24.895522  712152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:18:24.991147  712152 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0918 19:18:25.395915  712152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:18:25.485806  712152 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0918 19:18:25.895176  712152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:18:25.995264  712152 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0918 19:18:26.395906  712152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:18:26.517407  712152 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0918 19:18:26.895729  712152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:18:26.992957  712152 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0918 19:18:27.395221  712152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:18:27.489907  712152 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0918 19:18:27.894896  712152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:18:27.988957  712152 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0918 19:18:28.395406  712152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:18:28.495941  712152 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0918 19:18:28.895852  712152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:18:28.989288  712152 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0918 19:18:29.395813  712152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:18:29.483982  712152 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0918 19:18:29.895426  712152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0918 19:18:30.086045  712152 command_runner.go:130] > NAME      SECRETS   AGE
	I0918 19:18:30.086070  712152 command_runner.go:130] > default   0         1s
	I0918 19:18:30.090025  712152 kubeadm.go:1081] duration metric: took 12.162637559s to wait for elevateKubeSystemPrivileges.
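The burst of "serviceaccounts \"default\" not found" responses above is a poll loop: judging by the timestamps, `kubectl get sa default` is retried roughly every 500ms until the controller manager creates the default service account, at which point the elevated RBAC binding can take effect. A hedged sketch of such a loop (waitForDefaultSA and both argument paths are placeholders for illustration, not minikube internals):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForDefaultSA keeps running "kubectl get sa default" until the
    // command succeeds (the service account exists) or the deadline passes.
    func waitForDefaultSA(kubectlPath, kubeconfig string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            cmd := exec.Command(kubectlPath, "get", "sa", "default", "--kubeconfig="+kubeconfig)
            if cmd.Run() == nil {
                return nil // "default" exists; privileges can be elevated
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("default service account not created within %s", timeout)
    }

    func main() {
        err := waitForDefaultSA("/var/lib/minikube/binaries/v1.28.2/kubectl",
            "/var/lib/minikube/kubeconfig", time.Minute)
        fmt.Println(err)
    }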
	I0918 19:18:30.090057  712152 kubeadm.go:406] StartCluster complete in 31.130312485s
	I0918 19:18:30.090076  712152 settings.go:142] acquiring lock: {Name:mk1cee0139b5f0ae29a168e7793f3f69abc95f11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:18:30.090152  712152 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17263-642665/kubeconfig
	I0918 19:18:30.091025  712152 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17263-642665/kubeconfig: {Name:mkbc55d6d811840d4d5667f8f39c79585e0314ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:18:30.091631  712152 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17263-642665/kubeconfig
	I0918 19:18:30.091877  712152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0918 19:18:30.092829  712152 kapi.go:59] client config for multinode-689235: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17263-642665/.minikube/profiles/multinode-689235/client.crt", KeyFile:"/home/jenkins/minikube-integration/17263-642665/.minikube/profiles/multinode-689235/client.key", CAFile:"/home/jenkins/minikube-integration/17263-642665/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1697f50), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0918 19:18:30.094339  712152 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0918 19:18:30.094426  712152 round_trippers.go:469] Request Headers:
	I0918 19:18:30.094437  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:18:30.094449  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:18:30.097463  712152 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0918 19:18:30.097682  712152 addons.go:69] Setting storage-provisioner=true in profile "multinode-689235"
	I0918 19:18:30.097720  712152 addons.go:231] Setting addon storage-provisioner=true in "multinode-689235"
	I0918 19:18:30.097815  712152 host.go:66] Checking if "multinode-689235" exists ...
	I0918 19:18:30.097965  712152 cert_rotation.go:137] Starting client certificate rotation controller
	I0918 19:18:30.098383  712152 config.go:182] Loaded profile config "multinode-689235": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0918 19:18:30.098440  712152 addons.go:69] Setting default-storageclass=true in profile "multinode-689235"
	I0918 19:18:30.098454  712152 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-689235"
	I0918 19:18:30.099016  712152 cli_runner.go:164] Run: docker container inspect multinode-689235 --format={{.State.Status}}
	I0918 19:18:30.100073  712152 cli_runner.go:164] Run: docker container inspect multinode-689235 --format={{.State.Status}}
	I0918 19:18:30.161772  712152 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17263-642665/kubeconfig
	I0918 19:18:30.162097  712152 kapi.go:59] client config for multinode-689235: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17263-642665/.minikube/profiles/multinode-689235/client.crt", KeyFile:"/home/jenkins/minikube-integration/17263-642665/.minikube/profiles/multinode-689235/client.key", CAFile:"/home/jenkins/minikube-integration/17263-642665/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1697f50), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0918 19:18:30.162449  712152 round_trippers.go:463] GET https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses
	I0918 19:18:30.162458  712152 round_trippers.go:469] Request Headers:
	I0918 19:18:30.162467  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:18:30.162474  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:18:30.178347  712152 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0918 19:18:30.180794  712152 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 19:18:30.180819  712152 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0918 19:18:30.180895  712152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-689235
	I0918 19:18:30.223740  712152 round_trippers.go:574] Response Status: 200 OK in 129 milliseconds
	I0918 19:18:30.223762  712152 round_trippers.go:577] Response Headers:
	I0918 19:18:30.223771  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:18:30 GMT
	I0918 19:18:30.223821  712152 round_trippers.go:580]     Audit-Id: ed6ca53d-a276-4a96-b225-c6c13738ee2b
	I0918 19:18:30.223829  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:18:30.223835  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:18:30.223845  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:18:30.223852  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:18:30.223858  712152 round_trippers.go:580]     Content-Length: 291
	I0918 19:18:30.225636  712152 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33490 SSHKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/multinode-689235/id_rsa Username:docker}
	I0918 19:18:30.236884  712152 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"afeed66f-a03c-4d03-a96f-db9cbbb7a8b0","resourceVersion":"332","creationTimestamp":"2023-09-18T19:18:16Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0918 19:18:30.237371  712152 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"afeed66f-a03c-4d03-a96f-db9cbbb7a8b0","resourceVersion":"332","creationTimestamp":"2023-09-18T19:18:16Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0918 19:18:30.237445  712152 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0918 19:18:30.237452  712152 round_trippers.go:469] Request Headers:
	I0918 19:18:30.237463  712152 round_trippers.go:473]     Content-Type: application/json
	I0918 19:18:30.237470  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:18:30.237477  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:18:30.239546  712152 round_trippers.go:574] Response Status: 200 OK in 77 milliseconds
	I0918 19:18:30.239568  712152 round_trippers.go:577] Response Headers:
	I0918 19:18:30.239577  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:18:30.239584  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:18:30.239591  712152 round_trippers.go:580]     Content-Length: 109
	I0918 19:18:30.239597  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:18:30 GMT
	I0918 19:18:30.239603  712152 round_trippers.go:580]     Audit-Id: 7b597046-f692-4ed9-bffb-d212c67c2a95
	I0918 19:18:30.239609  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:18:30.239615  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:18:30.248892  712152 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"333"},"items":[]}
	I0918 19:18:30.249232  712152 addons.go:231] Setting addon default-storageclass=true in "multinode-689235"
	I0918 19:18:30.249270  712152 host.go:66] Checking if "multinode-689235" exists ...
	I0918 19:18:30.249821  712152 cli_runner.go:164] Run: docker container inspect multinode-689235 --format={{.State.Status}}
	I0918 19:18:30.286482  712152 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0918 19:18:30.286502  712152 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0918 19:18:30.286566  712152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-689235
	I0918 19:18:30.318727  712152 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33490 SSHKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/multinode-689235/id_rsa Username:docker}
	I0918 19:18:30.343034  712152 round_trippers.go:574] Response Status: 200 OK in 105 milliseconds
	I0918 19:18:30.343056  712152 round_trippers.go:577] Response Headers:
	I0918 19:18:30.343064  712152 round_trippers.go:580]     Content-Length: 291
	I0918 19:18:30.343071  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:18:30 GMT
	I0918 19:18:30.343077  712152 round_trippers.go:580]     Audit-Id: 36fbf78f-dbea-449a-8299-8165fc8bf2e4
	I0918 19:18:30.343083  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:18:30.343089  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:18:30.343095  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:18:30.343101  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:18:30.344225  712152 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"afeed66f-a03c-4d03-a96f-db9cbbb7a8b0","resourceVersion":"338","creationTimestamp":"2023-09-18T19:18:16Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0918 19:18:30.344388  712152 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0918 19:18:30.344395  712152 round_trippers.go:469] Request Headers:
	I0918 19:18:30.344404  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:18:30.344411  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:18:30.374443  712152 round_trippers.go:574] Response Status: 200 OK in 30 milliseconds
	I0918 19:18:30.374468  712152 round_trippers.go:577] Response Headers:
	I0918 19:18:30.374477  712152 round_trippers.go:580]     Audit-Id: 9a927dd8-e588-471a-acdf-16acce54e479
	I0918 19:18:30.374484  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:18:30.374491  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:18:30.374498  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:18:30.374504  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:18:30.374513  712152 round_trippers.go:580]     Content-Length: 291
	I0918 19:18:30.374520  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:18:30 GMT
	I0918 19:18:30.390661  712152 command_runner.go:130] > apiVersion: v1
	I0918 19:18:30.390688  712152 command_runner.go:130] > data:
	I0918 19:18:30.390694  712152 command_runner.go:130] >   Corefile: |
	I0918 19:18:30.390699  712152 command_runner.go:130] >     .:53 {
	I0918 19:18:30.390704  712152 command_runner.go:130] >         errors
	I0918 19:18:30.390711  712152 command_runner.go:130] >         health {
	I0918 19:18:30.390717  712152 command_runner.go:130] >            lameduck 5s
	I0918 19:18:30.390721  712152 command_runner.go:130] >         }
	I0918 19:18:30.390726  712152 command_runner.go:130] >         ready
	I0918 19:18:30.390737  712152 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0918 19:18:30.390748  712152 command_runner.go:130] >            pods insecure
	I0918 19:18:30.390754  712152 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0918 19:18:30.390760  712152 command_runner.go:130] >            ttl 30
	I0918 19:18:30.390768  712152 command_runner.go:130] >         }
	I0918 19:18:30.390774  712152 command_runner.go:130] >         prometheus :9153
	I0918 19:18:30.390783  712152 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0918 19:18:30.390790  712152 command_runner.go:130] >            max_concurrent 1000
	I0918 19:18:30.390797  712152 command_runner.go:130] >         }
	I0918 19:18:30.390803  712152 command_runner.go:130] >         cache 30
	I0918 19:18:30.390808  712152 command_runner.go:130] >         loop
	I0918 19:18:30.390813  712152 command_runner.go:130] >         reload
	I0918 19:18:30.390821  712152 command_runner.go:130] >         loadbalance
	I0918 19:18:30.390826  712152 command_runner.go:130] >     }
	I0918 19:18:30.390834  712152 command_runner.go:130] > kind: ConfigMap
	I0918 19:18:30.390839  712152 command_runner.go:130] > metadata:
	I0918 19:18:30.390853  712152 command_runner.go:130] >   creationTimestamp: "2023-09-18T19:18:16Z"
	I0918 19:18:30.390858  712152 command_runner.go:130] >   name: coredns
	I0918 19:18:30.390864  712152 command_runner.go:130] >   namespace: kube-system
	I0918 19:18:30.390869  712152 command_runner.go:130] >   resourceVersion: "228"
	I0918 19:18:30.390875  712152 command_runner.go:130] >   uid: cf72949a-08e6-4864-9f03-1f41e78a3411
	I0918 19:18:30.394700  712152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0918 19:18:30.398399  712152 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"afeed66f-a03c-4d03-a96f-db9cbbb7a8b0","resourceVersion":"338","creationTimestamp":"2023-09-18T19:18:16Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0918 19:18:30.398510  712152 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-689235" context rescaled to 1 replicas
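The "coredns" rescale above is performed through the deployment's autoscaling/v1 Scale subresource; the request.go:1212 line just before it shows the returned Scale object with spec.replicas set to 1. A minimal command-line equivalent, as a sketch only (assuming the kubeconfig context carries the profile name, which minikube sets by default):

	kubectl --context multinode-689235 -n kube-system scale deployment coredns --replicas=1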
	I0918 19:18:30.398541  712152 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0918 19:18:30.403345  712152 out.go:177] * Verifying Kubernetes components...
	I0918 19:18:30.405405  712152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 19:18:30.402468  712152 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0918 19:18:30.492042  712152 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0918 19:18:30.937749  712152 command_runner.go:130] > configmap/coredns replaced
	I0918 19:18:30.943541  712152 start.go:917] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS's ConfigMap
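The ssh_runner pipeline a few lines up edits the Corefile shown above in place: sed inserts a hosts block (192.168.58.1 host.minikube.internal, with fallthrough) before the forward plugin and a log directive before errors, then kubectl replace writes the ConfigMap back. A quick spot check that the record landed, as a sketch assuming kubectl is pointed at this cluster:

	kubectl --context multinode-689235 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A 3 'hosts {'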
	I0918 19:18:30.943960  712152 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17263-642665/kubeconfig
	I0918 19:18:30.944232  712152 kapi.go:59] client config for multinode-689235: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17263-642665/.minikube/profiles/multinode-689235/client.crt", KeyFile:"/home/jenkins/minikube-integration/17263-642665/.minikube/profiles/multinode-689235/client.key", CAFile:"/home/jenkins/minikube-integration/17263-642665/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1697f50), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0918 19:18:30.944491  712152 node_ready.go:35] waiting up to 6m0s for node "multinode-689235" to be "Ready" ...
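From here node_ready.go polls GET /api/v1/nodes/multinode-689235 roughly every 500ms (the .459/.959 timestamps below) until the node's Ready condition turns True or the 6m budget runs out. A rough kubectl equivalent of the same wait, as a sketch rather than minikube's actual code path:

	kubectl --context multinode-689235 wait --for=condition=Ready node/multinode-689235 --timeout=6m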
	I0918 19:18:30.944584  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:18:30.944595  712152 round_trippers.go:469] Request Headers:
	I0918 19:18:30.944604  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:18:30.944614  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:18:30.950801  712152 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0918 19:18:30.950824  712152 round_trippers.go:577] Response Headers:
	I0918 19:18:30.950834  712152 round_trippers.go:580]     Audit-Id: d6e6ac31-1644-4e66-9e91-23bc2fbc9372
	I0918 19:18:30.950841  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:18:30.950847  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:18:30.950853  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:18:30.950859  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:18:30.950868  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:18:30 GMT
	I0918 19:18:30.951366  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","resourceVersion":"310","creationTimestamp":"2023-09-18T19:18:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36","minikube.k8s.io/name":"multinode-689235","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_18T19_18_17_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:18:13Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0918 19:18:30.952150  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:18:30.952167  712152 round_trippers.go:469] Request Headers:
	I0918 19:18:30.952178  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:18:30.952185  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:18:30.958232  712152 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0918 19:18:30.958255  712152 round_trippers.go:577] Response Headers:
	I0918 19:18:30.958265  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:18:30.958271  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:18:30.958278  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:18:30.958284  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:18:30.958291  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:18:30 GMT
	I0918 19:18:30.958297  712152 round_trippers.go:580]     Audit-Id: 0399a184-c584-4d70-826e-6fbbd8deabfe
	I0918 19:18:30.958909  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","resourceVersion":"310","creationTimestamp":"2023-09-18T19:18:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36","minikube.k8s.io/name":"multinode-689235","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_18T19_18_17_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:18:13Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0918 19:18:31.187740  712152 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0918 19:18:31.187856  712152 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0918 19:18:31.187883  712152 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0918 19:18:31.187921  712152 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0918 19:18:31.187945  712152 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0918 19:18:31.187972  712152 command_runner.go:130] > pod/storage-provisioner created
	I0918 19:18:31.188080  712152 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0918 19:18:31.192066  712152 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0918 19:18:31.193354  712152 addons.go:502] enable addons completed in 1.095913369s: enabled=[storage-provisioner default-storageclass]
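The two kubectl apply calls above created the storage-provisioner service account, RBAC bindings, endpoint, and pod, plus the standard StorageClass. One way to confirm the addon state after the fact, sketched with the binary built for this run:

	out/minikube-linux-arm64 -p multinode-689235 addons list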
	I0918 19:18:31.459978  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:18:31.460039  712152 round_trippers.go:469] Request Headers:
	I0918 19:18:31.460061  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:18:31.460082  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:18:31.462876  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:18:31.462932  712152 round_trippers.go:577] Response Headers:
	I0918 19:18:31.462941  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:18:31.462949  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:18:31 GMT
	I0918 19:18:31.462955  712152 round_trippers.go:580]     Audit-Id: 961056f9-ffde-435f-8f30-a2711ae8989c
	I0918 19:18:31.462966  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:18:31.462979  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:18:31.462990  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:18:31.463094  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","resourceVersion":"310","creationTimestamp":"2023-09-18T19:18:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36","minikube.k8s.io/name":"multinode-689235","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_18T19_18_17_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:18:13Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0918 19:18:31.959531  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:18:31.959596  712152 round_trippers.go:469] Request Headers:
	I0918 19:18:31.959630  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:18:31.959649  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:18:31.962617  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:18:31.962684  712152 round_trippers.go:577] Response Headers:
	I0918 19:18:31.962711  712152 round_trippers.go:580]     Audit-Id: 1e19cfbc-849f-46da-93e9-b4131d5d5309
	I0918 19:18:31.962725  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:18:31.962733  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:18:31.962739  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:18:31.962745  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:18:31.962752  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:18:31 GMT
	I0918 19:18:31.963145  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","resourceVersion":"310","creationTimestamp":"2023-09-18T19:18:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36","minikube.k8s.io/name":"multinode-689235","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_18T19_18_17_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:18:13Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0918 19:18:32.459596  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:18:32.459621  712152 round_trippers.go:469] Request Headers:
	I0918 19:18:32.459632  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:18:32.459642  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:18:32.462629  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:18:32.462745  712152 round_trippers.go:577] Response Headers:
	I0918 19:18:32.462783  712152 round_trippers.go:580]     Audit-Id: 34156305-7c91-4c7e-aac0-7a38cbef6ec2
	I0918 19:18:32.462807  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:18:32.462825  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:18:32.462844  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:18:32.462864  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:18:32.462890  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:18:32 GMT
	I0918 19:18:32.463204  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","resourceVersion":"310","creationTimestamp":"2023-09-18T19:18:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36","minikube.k8s.io/name":"multinode-689235","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_18T19_18_17_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:18:13Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0918 19:18:32.961172  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:18:32.961196  712152 round_trippers.go:469] Request Headers:
	I0918 19:18:32.961207  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:18:32.961214  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:18:32.964638  712152 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 19:18:32.964662  712152 round_trippers.go:577] Response Headers:
	I0918 19:18:32.964676  712152 round_trippers.go:580]     Audit-Id: a38ccba8-1eb9-4a5a-a2e5-d0f4d8b6ed64
	I0918 19:18:32.964683  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:18:32.964690  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:18:32.964696  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:18:32.964702  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:18:32.964709  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:18:32 GMT
	I0918 19:18:32.964933  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","resourceVersion":"310","creationTimestamp":"2023-09-18T19:18:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36","minikube.k8s.io/name":"multinode-689235","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_18T19_18_17_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:18:13Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0918 19:18:32.965359  712152 node_ready.go:58] node "multinode-689235" has status "Ready":"False"
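Each node_ready.go "Ready":"False" entry means the Ready condition inside the (truncated) node object above is still False; the kubelet flips it to True once the container runtime and network plugin report healthy. To inspect just that condition, as a sketch:

	kubectl --context multinode-689235 get node multinode-689235 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'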
	I0918 19:18:33.460118  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:18:33.460139  712152 round_trippers.go:469] Request Headers:
	I0918 19:18:33.460148  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:18:33.460156  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:18:33.462637  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:18:33.462709  712152 round_trippers.go:577] Response Headers:
	I0918 19:18:33.462732  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:18:33.462750  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:18:33.462780  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:18:33.462800  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:18:33 GMT
	I0918 19:18:33.462813  712152 round_trippers.go:580]     Audit-Id: 7db5af12-8d09-404c-af8f-efa9d0b03dd3
	I0918 19:18:33.462821  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:18:33.462929  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","resourceVersion":"310","creationTimestamp":"2023-09-18T19:18:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36","minikube.k8s.io/name":"multinode-689235","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_18T19_18_17_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:18:13Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0918 19:18:33.959693  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:18:33.959715  712152 round_trippers.go:469] Request Headers:
	I0918 19:18:33.959725  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:18:33.959733  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:18:33.962229  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:18:33.962253  712152 round_trippers.go:577] Response Headers:
	I0918 19:18:33.962262  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:18:33.962268  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:18:33.962274  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:18:33.962281  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:18:33.962287  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:18:33 GMT
	I0918 19:18:33.962293  712152 round_trippers.go:580]     Audit-Id: 8c476d3f-4bb0-4ac2-b8a0-a10dd60771b0
	I0918 19:18:33.962555  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","resourceVersion":"310","creationTimestamp":"2023-09-18T19:18:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36","minikube.k8s.io/name":"multinode-689235","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_18T19_18_17_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:18:13Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0918 19:18:34.460225  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:18:34.460245  712152 round_trippers.go:469] Request Headers:
	I0918 19:18:34.460255  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:18:34.460262  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:18:34.462859  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:18:34.462881  712152 round_trippers.go:577] Response Headers:
	I0918 19:18:34.462889  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:18:34.462895  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:18:34.462902  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:18:34.462908  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:18:34 GMT
	I0918 19:18:34.462915  712152 round_trippers.go:580]     Audit-Id: bf14d2c4-4aa2-4028-bc1e-4766848f6ec6
	I0918 19:18:34.462922  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:18:34.463226  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","resourceVersion":"310","creationTimestamp":"2023-09-18T19:18:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36","minikube.k8s.io/name":"multinode-689235","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_18T19_18_17_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:18:13Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0918 19:18:34.959558  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:18:34.959622  712152 round_trippers.go:469] Request Headers:
	I0918 19:18:34.959647  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:18:34.959659  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:18:34.962350  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:18:34.962391  712152 round_trippers.go:577] Response Headers:
	I0918 19:18:34.962399  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:18:34.962434  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:18:34.962441  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:18:34.962447  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:18:34.962454  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:18:34 GMT
	I0918 19:18:34.962460  712152 round_trippers.go:580]     Audit-Id: 3b953cf6-a490-47f1-8c8d-499e827b1b28
	I0918 19:18:34.962593  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","resourceVersion":"310","creationTimestamp":"2023-09-18T19:18:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36","minikube.k8s.io/name":"multinode-689235","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_18T19_18_17_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:18:13Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0918 19:18:35.460197  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:18:35.460226  712152 round_trippers.go:469] Request Headers:
	I0918 19:18:35.460236  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:18:35.460243  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:18:35.462756  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:18:35.462793  712152 round_trippers.go:577] Response Headers:
	I0918 19:18:35.462802  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:18:35.462809  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:18:35.462815  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:18:35 GMT
	I0918 19:18:35.462821  712152 round_trippers.go:580]     Audit-Id: 16937909-8e94-44a0-824c-67c19737a5c6
	I0918 19:18:35.462827  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:18:35.462834  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:18:35.463090  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","resourceVersion":"310","creationTimestamp":"2023-09-18T19:18:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36","minikube.k8s.io/name":"multinode-689235","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_18T19_18_17_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:18:13Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0918 19:18:35.463498  712152 node_ready.go:58] node "multinode-689235" has status "Ready":"False"
	I0918 19:18:35.960433  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:18:35.960453  712152 round_trippers.go:469] Request Headers:
	I0918 19:18:35.960463  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:18:35.960469  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:18:35.963323  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:18:35.963348  712152 round_trippers.go:577] Response Headers:
	I0918 19:18:35.963356  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:18:35.963363  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:18:35.963369  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:18:35.963375  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:18:35 GMT
	I0918 19:18:35.963381  712152 round_trippers.go:580]     Audit-Id: 22fb415b-b3cf-47fb-809f-394669a76261
	I0918 19:18:35.963391  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:18:35.963677  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","resourceVersion":"310","creationTimestamp":"2023-09-18T19:18:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36","minikube.k8s.io/name":"multinode-689235","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_18T19_18_17_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:18:13Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0918 19:18:36.459789  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:18:36.459810  712152 round_trippers.go:469] Request Headers:
	I0918 19:18:36.459820  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:18:36.459828  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:18:36.462374  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:18:36.462394  712152 round_trippers.go:577] Response Headers:
	I0918 19:18:36.462403  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:18:36.462409  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:18:36.462415  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:18:36.462422  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:18:36 GMT
	I0918 19:18:36.462428  712152 round_trippers.go:580]     Audit-Id: 6c249297-a7e3-47a5-b73d-fd219553903a
	I0918 19:18:36.462434  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:18:36.462539  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","resourceVersion":"310","creationTimestamp":"2023-09-18T19:18:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36","minikube.k8s.io/name":"multinode-689235","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_18T19_18_17_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:18:13Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0918 19:18:36.959680  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:18:36.959711  712152 round_trippers.go:469] Request Headers:
	I0918 19:18:36.959722  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:18:36.959730  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:18:36.962815  712152 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 19:18:36.962837  712152 round_trippers.go:577] Response Headers:
	I0918 19:18:36.962845  712152 round_trippers.go:580]     Audit-Id: ddd0619d-8cc5-4597-9bae-fd3edcfec736
	I0918 19:18:36.962852  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:18:36.962858  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:18:36.962864  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:18:36.962870  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:18:36.962878  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:18:36 GMT
	I0918 19:18:36.963026  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","resourceVersion":"310","creationTimestamp":"2023-09-18T19:18:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36","minikube.k8s.io/name":"multinode-689235","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_18T19_18_17_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:18:13Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0918 19:18:37.459868  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:18:37.459889  712152 round_trippers.go:469] Request Headers:
	I0918 19:18:37.459899  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:18:37.459906  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:18:37.462735  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:18:37.462760  712152 round_trippers.go:577] Response Headers:
	I0918 19:18:37.462769  712152 round_trippers.go:580]     Audit-Id: c174b598-a2fe-4dbd-a762-2fbd3056ff6f
	I0918 19:18:37.462775  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:18:37.462783  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:18:37.462790  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:18:37.462796  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:18:37.462803  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:18:37 GMT
	I0918 19:18:37.462929  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","resourceVersion":"310","creationTimestamp":"2023-09-18T19:18:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36","minikube.k8s.io/name":"multinode-689235","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_18T19_18_17_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:18:13Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0918 19:18:37.960143  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:18:37.960163  712152 round_trippers.go:469] Request Headers:
	I0918 19:18:37.960172  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:18:37.960180  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:18:37.962874  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:18:37.962908  712152 round_trippers.go:577] Response Headers:
	I0918 19:18:37.962916  712152 round_trippers.go:580]     Audit-Id: 7075b755-7060-4614-9f6e-2a83fb18d8e7
	I0918 19:18:37.962926  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:18:37.962933  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:18:37.962939  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:18:37.962946  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:18:37.962954  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:18:37 GMT
	I0918 19:18:37.963345  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","resourceVersion":"310","creationTimestamp":"2023-09-18T19:18:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36","minikube.k8s.io/name":"multinode-689235","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_18T19_18_17_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:18:13Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0918 19:18:37.963760  712152 node_ready.go:58] node "multinode-689235" has status "Ready":"False"
	I0918 19:18:38.459574  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:18:38.459595  712152 round_trippers.go:469] Request Headers:
	I0918 19:18:38.459604  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:18:38.459611  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:18:38.462122  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:18:38.462142  712152 round_trippers.go:577] Response Headers:
	I0918 19:18:38.462151  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:18:38.462158  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:18:38.462164  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:18:38.462170  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:18:38.462176  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:18:38 GMT
	I0918 19:18:38.462183  712152 round_trippers.go:580]     Audit-Id: f96e5a2e-a1d7-40ab-b55c-dba492fd316c
	I0918 19:18:38.462595  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","resourceVersion":"310","creationTimestamp":"2023-09-18T19:18:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36","minikube.k8s.io/name":"multinode-689235","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_18T19_18_17_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:18:13Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0918 19:18:38.960216  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:18:38.960236  712152 round_trippers.go:469] Request Headers:
	I0918 19:18:38.960245  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:18:38.960253  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:18:38.962745  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:18:38.962765  712152 round_trippers.go:577] Response Headers:
	I0918 19:18:38.962774  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:18:38 GMT
	I0918 19:18:38.962780  712152 round_trippers.go:580]     Audit-Id: ab86889b-662e-477c-a38f-33ffc1e0bc94
	I0918 19:18:38.962786  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:18:38.962792  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:18:38.962798  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:18:38.962804  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:18:38.962944  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","resourceVersion":"310","creationTimestamp":"2023-09-18T19:18:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36","minikube.k8s.io/name":"multinode-689235","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_18T19_18_17_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:18:13Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0918 19:18:39.460124  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:18:39.460146  712152 round_trippers.go:469] Request Headers:
	I0918 19:18:39.460156  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:18:39.460163  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:18:39.462684  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:18:39.462713  712152 round_trippers.go:577] Response Headers:
	I0918 19:18:39.462725  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:18:39 GMT
	I0918 19:18:39.462733  712152 round_trippers.go:580]     Audit-Id: 0e2c1e57-6de7-43b4-b1ba-06232af767bd
	I0918 19:18:39.462741  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:18:39.462747  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:18:39.462753  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:18:39.462762  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:18:39.462863  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","resourceVersion":"310","creationTimestamp":"2023-09-18T19:18:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36","minikube.k8s.io/name":"multinode-689235","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_18T19_18_17_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:18:13Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0918 19:18:39.959603  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:18:39.959629  712152 round_trippers.go:469] Request Headers:
	I0918 19:18:39.959639  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:18:39.959646  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:18:39.962169  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:18:39.962196  712152 round_trippers.go:577] Response Headers:
	I0918 19:18:39.962204  712152 round_trippers.go:580]     Audit-Id: a17688dd-0853-4823-ab5c-b9079631f5e0
	I0918 19:18:39.962211  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:18:39.962217  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:18:39.962224  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:18:39.962232  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:18:39.962245  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:18:39 GMT
	I0918 19:18:39.962366  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","resourceVersion":"310","creationTimestamp":"2023-09-18T19:18:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36","minikube.k8s.io/name":"multinode-689235","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_18T19_18_17_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:18:13Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0918 19:18:40.459630  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:18:40.459657  712152 round_trippers.go:469] Request Headers:
	I0918 19:18:40.459667  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:18:40.459675  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:18:40.462128  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:18:40.462162  712152 round_trippers.go:577] Response Headers:
	I0918 19:18:40.462171  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:18:40 GMT
	I0918 19:18:40.462177  712152 round_trippers.go:580]     Audit-Id: c9ee7adc-9f81-466e-8803-aab86646ff40
	I0918 19:18:40.462184  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:18:40.462190  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:18:40.462196  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:18:40.462207  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:18:40.462331  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","resourceVersion":"310","creationTimestamp":"2023-09-18T19:18:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36","minikube.k8s.io/name":"multinode-689235","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_18T19_18_17_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:18:13Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0918 19:18:40.462800  712152 node_ready.go:58] node "multinode-689235" has status "Ready":"False"
	I0918 19:18:40.960264  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:18:40.960287  712152 round_trippers.go:469] Request Headers:
	I0918 19:18:40.960297  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:18:40.960305  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:18:40.962951  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:18:40.962973  712152 round_trippers.go:577] Response Headers:
	I0918 19:18:40.962982  712152 round_trippers.go:580]     Audit-Id: 98abf50a-4ced-49e5-a449-abdcf3836c1f
	I0918 19:18:40.962989  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:18:40.962995  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:18:40.963001  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:18:40.963007  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:18:40.963014  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:18:40 GMT
	I0918 19:18:40.963148  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","resourceVersion":"310","creationTimestamp":"2023-09-18T19:18:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36","minikube.k8s.io/name":"multinode-689235","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_18T19_18_17_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:18:13Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0918 19:18:41.460330  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:18:41.460355  712152 round_trippers.go:469] Request Headers:
	I0918 19:18:41.460365  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:18:41.460372  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:18:41.462987  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:18:41.463012  712152 round_trippers.go:577] Response Headers:
	I0918 19:18:41.463021  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:18:41 GMT
	I0918 19:18:41.463027  712152 round_trippers.go:580]     Audit-Id: 604169da-00f6-4b17-9c9e-edbf05dbc172
	I0918 19:18:41.463034  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:18:41.463040  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:18:41.463046  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:18:41.463052  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:18:41.463178  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","resourceVersion":"310","creationTimestamp":"2023-09-18T19:18:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36","minikube.k8s.io/name":"multinode-689235","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_18T19_18_17_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:18:13Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0918 19:18:41.959539  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:18:41.959560  712152 round_trippers.go:469] Request Headers:
	I0918 19:18:41.959569  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:18:41.959577  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:18:41.962030  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:18:41.962052  712152 round_trippers.go:577] Response Headers:
	I0918 19:18:41.962060  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:18:41 GMT
	I0918 19:18:41.962068  712152 round_trippers.go:580]     Audit-Id: fe1339e0-59cb-4414-8313-2f6123c7710c
	I0918 19:18:41.962075  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:18:41.962081  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:18:41.962087  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:18:41.962093  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:18:41.962235  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","resourceVersion":"310","creationTimestamp":"2023-09-18T19:18:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36","minikube.k8s.io/name":"multinode-689235","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_18T19_18_17_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:18:13Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0918 19:18:42.460435  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:18:42.460454  712152 round_trippers.go:469] Request Headers:
	I0918 19:18:42.460464  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:18:42.460472  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:18:42.462976  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:18:42.462997  712152 round_trippers.go:577] Response Headers:
	I0918 19:18:42.463006  712152 round_trippers.go:580]     Audit-Id: e348b095-be13-4ad5-aa69-06724630fb44
	I0918 19:18:42.463012  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:18:42.463018  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:18:42.463024  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:18:42.463030  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:18:42.463037  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:18:42 GMT
	I0918 19:18:42.463148  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","resourceVersion":"310","creationTimestamp":"2023-09-18T19:18:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36","minikube.k8s.io/name":"multinode-689235","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_18T19_18_17_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:18:13Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0918 19:18:42.463554  712152 node_ready.go:58] node "multinode-689235" has status "Ready":"False"
	I0918 19:18:42.959996  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:18:42.960018  712152 round_trippers.go:469] Request Headers:
	I0918 19:18:42.960027  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:18:42.960035  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:18:42.962732  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:18:42.962753  712152 round_trippers.go:577] Response Headers:
	I0918 19:18:42.962764  712152 round_trippers.go:580]     Audit-Id: 361467a1-26d1-4d1f-a573-ccd69b52ca69
	I0918 19:18:42.962771  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:18:42.962777  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:18:42.962783  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:18:42.962789  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:18:42.962795  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:18:42 GMT
	I0918 19:18:42.962913  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","resourceVersion":"310","creationTimestamp":"2023-09-18T19:18:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36","minikube.k8s.io/name":"multinode-689235","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_18T19_18_17_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:18:13Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0918 19:18:43.460343  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:18:43.460367  712152 round_trippers.go:469] Request Headers:
	I0918 19:18:43.460378  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:18:43.460386  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:18:43.462860  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:18:43.462884  712152 round_trippers.go:577] Response Headers:
	I0918 19:18:43.462893  712152 round_trippers.go:580]     Audit-Id: d26d069b-a34d-45c3-bca9-128e95a59e79
	I0918 19:18:43.462899  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:18:43.462905  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:18:43.462912  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:18:43.462918  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:18:43.462925  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:18:43 GMT
	I0918 19:18:43.463034  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","resourceVersion":"310","creationTimestamp":"2023-09-18T19:18:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36","minikube.k8s.io/name":"multinode-689235","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_18T19_18_17_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:18:13Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0918 19:18:43.960419  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:18:43.960439  712152 round_trippers.go:469] Request Headers:
	I0918 19:18:43.960449  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:18:43.960456  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:18:43.963147  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:18:43.963174  712152 round_trippers.go:577] Response Headers:
	I0918 19:18:43.963184  712152 round_trippers.go:580]     Audit-Id: d24519ec-4e55-4ca4-a16f-24cefd2471da
	I0918 19:18:43.963190  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:18:43.963197  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:18:43.963203  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:18:43.963209  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:18:43.963215  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:18:43 GMT
	I0918 19:18:43.963321  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","resourceVersion":"310","creationTimestamp":"2023-09-18T19:18:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36","minikube.k8s.io/name":"multinode-689235","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_18T19_18_17_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:18:13Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0918 19:18:44.460464  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:18:44.460484  712152 round_trippers.go:469] Request Headers:
	I0918 19:18:44.460493  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:18:44.460500  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:18:44.463086  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:18:44.463106  712152 round_trippers.go:577] Response Headers:
	I0918 19:18:44.463115  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:18:44.463122  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:18:44.463129  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:18:44 GMT
	I0918 19:18:44.463135  712152 round_trippers.go:580]     Audit-Id: 4a63cadc-eee8-47be-90cf-f073eab09253
	I0918 19:18:44.463141  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:18:44.463147  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:18:44.463315  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","resourceVersion":"310","creationTimestamp":"2023-09-18T19:18:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36","minikube.k8s.io/name":"multinode-689235","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_18T19_18_17_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:18:13Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0918 19:18:44.463706  712152 node_ready.go:58] node "multinode-689235" has status "Ready":"False"
	I0918 19:18:44.959446  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:18:44.959470  712152 round_trippers.go:469] Request Headers:
	I0918 19:18:44.959480  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:18:44.959487  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:18:44.961949  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:18:44.961969  712152 round_trippers.go:577] Response Headers:
	I0918 19:18:44.961977  712152 round_trippers.go:580]     Audit-Id: 47472f37-3cec-4f8a-b423-bed97000a5fa
	I0918 19:18:44.961983  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:18:44.961989  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:18:44.961996  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:18:44.962002  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:18:44.962008  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:18:44 GMT
	I0918 19:18:44.962153  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","resourceVersion":"310","creationTimestamp":"2023-09-18T19:18:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36","minikube.k8s.io/name":"multinode-689235","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_18T19_18_17_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:18:13Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0918 19:18:45.459818  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:18:45.459840  712152 round_trippers.go:469] Request Headers:
	I0918 19:18:45.459858  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:18:45.459866  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:18:45.472138  712152 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0918 19:18:45.472160  712152 round_trippers.go:577] Response Headers:
	I0918 19:18:45.472168  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:18:45.472175  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:18:45.472181  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:18:45.472187  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:18:45.472193  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:18:45 GMT
	I0918 19:18:45.472199  712152 round_trippers.go:580]     Audit-Id: 7487b76b-ea4c-4d3b-8fca-07df074b79df
	I0918 19:18:45.472671  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","resourceVersion":"310","creationTimestamp":"2023-09-18T19:18:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36","minikube.k8s.io/name":"multinode-689235","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_18T19_18_17_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:18:13Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0918 19:18:45.959498  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:18:45.959538  712152 round_trippers.go:469] Request Headers:
	I0918 19:18:45.959547  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:18:45.959554  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:18:45.962144  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:18:45.962165  712152 round_trippers.go:577] Response Headers:
	I0918 19:18:45.962174  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:18:45.962181  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:18:45 GMT
	I0918 19:18:45.962187  712152 round_trippers.go:580]     Audit-Id: 1c4ed63f-0340-45dd-9e1b-2520407b39a9
	I0918 19:18:45.962193  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:18:45.962199  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:18:45.962205  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:18:45.962371  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","resourceVersion":"310","creationTimestamp":"2023-09-18T19:18:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36","minikube.k8s.io/name":"multinode-689235","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_18T19_18_17_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:18:13Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0918 19:18:46.459467  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:18:46.459489  712152 round_trippers.go:469] Request Headers:
	I0918 19:18:46.459500  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:18:46.459508  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:18:46.462086  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:18:46.462109  712152 round_trippers.go:577] Response Headers:
	I0918 19:18:46.462118  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:18:46.462125  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:18:46 GMT
	I0918 19:18:46.462131  712152 round_trippers.go:580]     Audit-Id: ef9144ef-4ea6-453f-9324-a02124277b4d
	I0918 19:18:46.462137  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:18:46.462143  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:18:46.462154  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:18:46.462306  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","resourceVersion":"310","creationTimestamp":"2023-09-18T19:18:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36","minikube.k8s.io/name":"multinode-689235","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_18T19_18_17_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:18:13Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0918 19:18:46.960435  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:18:46.960456  712152 round_trippers.go:469] Request Headers:
	I0918 19:18:46.960466  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:18:46.960473  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:18:46.963589  712152 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 19:18:46.963616  712152 round_trippers.go:577] Response Headers:
	I0918 19:18:46.963625  712152 round_trippers.go:580]     Audit-Id: ac4b895b-a4dd-438d-bfc4-69c4f2c68ff6
	I0918 19:18:46.963631  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:18:46.963637  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:18:46.963645  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:18:46.963652  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:18:46.963658  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:18:46 GMT
	I0918 19:18:46.963798  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","resourceVersion":"310","creationTimestamp":"2023-09-18T19:18:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36","minikube.k8s.io/name":"multinode-689235","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_18T19_18_17_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:18:13Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0918 19:18:46.964192  712152 node_ready.go:58] node "multinode-689235" has status "Ready":"False"
	I0918 19:18:47.459898  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:18:47.459922  712152 round_trippers.go:469] Request Headers:
	I0918 19:18:47.459932  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:18:47.459940  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:18:47.462454  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:18:47.462476  712152 round_trippers.go:577] Response Headers:
	I0918 19:18:47.462485  712152 round_trippers.go:580]     Audit-Id: 551b2bd1-5cd1-4782-8c2a-3b9caaea7986
	I0918 19:18:47.462492  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:18:47.462498  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:18:47.462504  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:18:47.462510  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:18:47.462517  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:18:47 GMT
	I0918 19:18:47.462725  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","resourceVersion":"310","creationTimestamp":"2023-09-18T19:18:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36","minikube.k8s.io/name":"multinode-689235","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_18T19_18_17_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:18:13Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0918 19:18:47.960013  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:18:47.960037  712152 round_trippers.go:469] Request Headers:
	I0918 19:18:47.960048  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:18:47.960055  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:18:47.963538  712152 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 19:18:47.963565  712152 round_trippers.go:577] Response Headers:
	I0918 19:18:47.963574  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:18:47.963581  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:18:47.963588  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:18:47 GMT
	I0918 19:18:47.963594  712152 round_trippers.go:580]     Audit-Id: 459aba11-41f3-417d-8c95-d015162abc29
	I0918 19:18:47.963601  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:18:47.963609  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:18:47.963848  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","resourceVersion":"310","creationTimestamp":"2023-09-18T19:18:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36","minikube.k8s.io/name":"multinode-689235","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_18T19_18_17_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:18:13Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0918 19:18:48.459531  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:18:48.459552  712152 round_trippers.go:469] Request Headers:
	I0918 19:18:48.459569  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:18:48.459577  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:18:48.462143  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:18:48.462165  712152 round_trippers.go:577] Response Headers:
	I0918 19:18:48.462173  712152 round_trippers.go:580]     Audit-Id: 4047696d-1d0c-4578-8c6d-27d94b8e5302
	I0918 19:18:48.462179  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:18:48.462185  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:18:48.462192  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:18:48.462198  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:18:48.462204  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:18:48 GMT
	I0918 19:18:48.462301  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","resourceVersion":"310","creationTimestamp":"2023-09-18T19:18:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36","minikube.k8s.io/name":"multinode-689235","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_18T19_18_17_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:18:13Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0918 19:18:48.959559  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:18:48.959590  712152 round_trippers.go:469] Request Headers:
	I0918 19:18:48.959599  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:18:48.959606  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:18:48.962316  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:18:48.962337  712152 round_trippers.go:577] Response Headers:
	I0918 19:18:48.962346  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:18:48 GMT
	I0918 19:18:48.962352  712152 round_trippers.go:580]     Audit-Id: e7e317c8-2839-4107-a2b3-df149e19cf3c
	I0918 19:18:48.962358  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:18:48.962364  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:18:48.962370  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:18:48.962376  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:18:48.962489  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","resourceVersion":"310","creationTimestamp":"2023-09-18T19:18:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36","minikube.k8s.io/name":"multinode-689235","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_18T19_18_17_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:18:13Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0918 19:18:49.459499  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:18:49.459531  712152 round_trippers.go:469] Request Headers:
	I0918 19:18:49.459542  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:18:49.459556  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:18:49.462706  712152 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 19:18:49.462729  712152 round_trippers.go:577] Response Headers:
	I0918 19:18:49.462738  712152 round_trippers.go:580]     Audit-Id: 3d909b99-e6b8-4102-a28a-c38373683824
	I0918 19:18:49.462745  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:18:49.462751  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:18:49.462757  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:18:49.462763  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:18:49.462770  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:18:49 GMT
	I0918 19:18:49.462954  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","resourceVersion":"310","creationTimestamp":"2023-09-18T19:18:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36","minikube.k8s.io/name":"multinode-689235","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_18T19_18_17_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:18:13Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0918 19:18:49.463371  712152 node_ready.go:58] node "multinode-689235" has status "Ready":"False"
	I0918 19:18:49.960420  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:18:49.960440  712152 round_trippers.go:469] Request Headers:
	I0918 19:18:49.960449  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:18:49.960456  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:18:49.963715  712152 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 19:18:49.963738  712152 round_trippers.go:577] Response Headers:
	I0918 19:18:49.963746  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:18:49.963753  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:18:49.963760  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:18:49 GMT
	I0918 19:18:49.963766  712152 round_trippers.go:580]     Audit-Id: c2951265-45cb-441c-a065-dc8ea066551f
	I0918 19:18:49.963772  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:18:49.963797  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:18:49.963927  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","resourceVersion":"310","creationTimestamp":"2023-09-18T19:18:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36","minikube.k8s.io/name":"multinode-689235","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_18T19_18_17_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:18:13Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0918 19:18:50.460196  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:18:50.460221  712152 round_trippers.go:469] Request Headers:
	I0918 19:18:50.460230  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:18:50.460237  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:18:50.462817  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:18:50.462843  712152 round_trippers.go:577] Response Headers:
	I0918 19:18:50.462852  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:18:50.462859  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:18:50.462865  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:18:50.462872  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:18:50.462878  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:18:50 GMT
	I0918 19:18:50.462885  712152 round_trippers.go:580]     Audit-Id: 94154284-4371-45ea-a425-7337e5508483
	I0918 19:18:50.463001  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","resourceVersion":"310","creationTimestamp":"2023-09-18T19:18:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36","minikube.k8s.io/name":"multinode-689235","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_18T19_18_17_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:18:13Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0918 19:18:50.960135  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:18:50.960174  712152 round_trippers.go:469] Request Headers:
	I0918 19:18:50.960184  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:18:50.960192  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:18:50.962869  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:18:50.962895  712152 round_trippers.go:577] Response Headers:
	I0918 19:18:50.962903  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:18:50 GMT
	I0918 19:18:50.962910  712152 round_trippers.go:580]     Audit-Id: 907a312c-5789-452d-9a5f-31d31850fca8
	I0918 19:18:50.962916  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:18:50.962923  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:18:50.962929  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:18:50.962936  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:18:50.963087  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","resourceVersion":"310","creationTimestamp":"2023-09-18T19:18:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36","minikube.k8s.io/name":"multinode-689235","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_18T19_18_17_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:18:13Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0918 19:18:51.460199  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:18:51.460220  712152 round_trippers.go:469] Request Headers:
	I0918 19:18:51.460229  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:18:51.460237  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:18:51.462836  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:18:51.462861  712152 round_trippers.go:577] Response Headers:
	I0918 19:18:51.462871  712152 round_trippers.go:580]     Audit-Id: 1b399262-46bb-4eae-afc7-07e4f5aae1cb
	I0918 19:18:51.462877  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:18:51.462883  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:18:51.462889  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:18:51.462896  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:18:51.462909  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:18:51 GMT
	I0918 19:18:51.463187  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","resourceVersion":"310","creationTimestamp":"2023-09-18T19:18:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36","minikube.k8s.io/name":"multinode-689235","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_18T19_18_17_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:18:13Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0918 19:18:51.463586  712152 node_ready.go:58] node "multinode-689235" has status "Ready":"False"
	I0918 19:18:51.960434  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:18:51.960458  712152 round_trippers.go:469] Request Headers:
	I0918 19:18:51.960468  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:18:51.960476  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:18:51.963645  712152 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 19:18:51.963667  712152 round_trippers.go:577] Response Headers:
	I0918 19:18:51.963676  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:18:51.963683  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:18:51 GMT
	I0918 19:18:51.963690  712152 round_trippers.go:580]     Audit-Id: 8fa8cb73-3efa-4819-a1fe-38088e3d8437
	I0918 19:18:51.963696  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:18:51.963702  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:18:51.963708  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:18:51.963853  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","resourceVersion":"310","creationTimestamp":"2023-09-18T19:18:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36","minikube.k8s.io/name":"multinode-689235","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_18T19_18_17_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:18:13Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0918 19:18:52.459562  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:18:52.459585  712152 round_trippers.go:469] Request Headers:
	I0918 19:18:52.459599  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:18:52.459627  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:18:52.462334  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:18:52.462358  712152 round_trippers.go:577] Response Headers:
	I0918 19:18:52.462366  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:18:52.462373  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:18:52.462379  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:18:52.462385  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:18:52.462392  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:18:52 GMT
	I0918 19:18:52.462398  712152 round_trippers.go:580]     Audit-Id: ce56fe86-4e12-46c2-8e91-84a3219446ea
	I0918 19:18:52.462759  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","resourceVersion":"310","creationTimestamp":"2023-09-18T19:18:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36","minikube.k8s.io/name":"multinode-689235","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_18T19_18_17_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:18:13Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0918 19:18:52.960333  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:18:52.960356  712152 round_trippers.go:469] Request Headers:
	I0918 19:18:52.960365  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:18:52.960374  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:18:52.962860  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:18:52.962887  712152 round_trippers.go:577] Response Headers:
	I0918 19:18:52.962896  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:18:52 GMT
	I0918 19:18:52.962903  712152 round_trippers.go:580]     Audit-Id: 1ad3f11c-749c-4f55-8bf7-3c97abfe7bcc
	I0918 19:18:52.962909  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:18:52.962915  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:18:52.962924  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:18:52.962931  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:18:52.963307  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","resourceVersion":"310","creationTimestamp":"2023-09-18T19:18:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36","minikube.k8s.io/name":"multinode-689235","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_18T19_18_17_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:18:13Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0918 19:18:53.460492  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:18:53.460516  712152 round_trippers.go:469] Request Headers:
	I0918 19:18:53.460527  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:18:53.460535  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:18:53.463020  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:18:53.463043  712152 round_trippers.go:577] Response Headers:
	I0918 19:18:53.463052  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:18:53.463059  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:18:53.463065  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:18:53.463072  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:18:53 GMT
	I0918 19:18:53.463083  712152 round_trippers.go:580]     Audit-Id: bacc1ab7-6052-4677-bbe2-34caf9789a7b
	I0918 19:18:53.463090  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:18:53.463339  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","resourceVersion":"310","creationTimestamp":"2023-09-18T19:18:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36","minikube.k8s.io/name":"multinode-689235","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_18T19_18_17_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:18:13Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0918 19:18:53.463745  712152 node_ready.go:58] node "multinode-689235" has status "Ready":"False"
	I0918 19:18:53.959528  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:18:53.959552  712152 round_trippers.go:469] Request Headers:
	I0918 19:18:53.959562  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:18:53.959569  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:18:53.962158  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:18:53.962185  712152 round_trippers.go:577] Response Headers:
	I0918 19:18:53.962199  712152 round_trippers.go:580]     Audit-Id: 8cf3b0af-d157-48e8-96b4-e01254b3386e
	I0918 19:18:53.962206  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:18:53.962212  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:18:53.962218  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:18:53.962225  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:18:53.962235  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:18:53 GMT
	I0918 19:18:53.962369  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","resourceVersion":"310","creationTimestamp":"2023-09-18T19:18:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36","minikube.k8s.io/name":"multinode-689235","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_18T19_18_17_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-18T19:18:13Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0918 19:18:54.460081  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:18:54.460106  712152 round_trippers.go:469] Request Headers:
	I0918 19:18:54.460116  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:18:54.460123  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:18:54.462702  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:18:54.462727  712152 round_trippers.go:577] Response Headers:
	I0918 19:18:54.462736  712152 round_trippers.go:580]     Audit-Id: e269d1f0-8ea9-4404-8dd2-85c6d7658a8d
	I0918 19:18:54.462744  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:18:54.462752  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:18:54.462758  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:18:54.462765  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:18:54.462776  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:18:54 GMT
	I0918 19:18:54.463056  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","resourceVersion":"310","creationTimestamp":"2023-09-18T19:18:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36","minikube.k8s.io/name":"multinode-689235","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_18T19_18_17_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-18T19:18:13Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0918 19:18:54.960201  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:18:54.960223  712152 round_trippers.go:469] Request Headers:
	I0918 19:18:54.960233  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:18:54.960240  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:18:54.964085  712152 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 19:18:54.964106  712152 round_trippers.go:577] Response Headers:
	I0918 19:18:54.964115  712152 round_trippers.go:580]     Audit-Id: 4f6c95e5-0049-47b8-ab41-4b014877a335
	I0918 19:18:54.964121  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:18:54.964127  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:18:54.964133  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:18:54.964140  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:18:54.964146  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:18:54 GMT
	I0918 19:18:54.964307  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","resourceVersion":"310","creationTimestamp":"2023-09-18T19:18:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36","minikube.k8s.io/name":"multinode-689235","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_18T19_18_17_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-18T19:18:13Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0918 19:18:55.460406  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:18:55.460436  712152 round_trippers.go:469] Request Headers:
	I0918 19:18:55.460447  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:18:55.460454  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:18:55.462842  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:18:55.462867  712152 round_trippers.go:577] Response Headers:
	I0918 19:18:55.462875  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:18:55.462882  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:18:55.462888  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:18:55.462895  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:18:55.462901  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:18:55 GMT
	I0918 19:18:55.462907  712152 round_trippers.go:580]     Audit-Id: 71e8e4ab-65d8-4e43-b8d5-f01b44505bdb
	I0918 19:18:55.463006  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","resourceVersion":"310","creationTimestamp":"2023-09-18T19:18:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36","minikube.k8s.io/name":"multinode-689235","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_18T19_18_17_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-18T19:18:13Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0918 19:18:55.960129  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:18:55.960155  712152 round_trippers.go:469] Request Headers:
	I0918 19:18:55.960164  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:18:55.960172  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:18:55.963463  712152 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 19:18:55.963491  712152 round_trippers.go:577] Response Headers:
	I0918 19:18:55.963499  712152 round_trippers.go:580]     Audit-Id: 8bb78928-6c18-4903-b214-4e631b8f4ff0
	I0918 19:18:55.963506  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:18:55.963512  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:18:55.963518  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:18:55.963526  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:18:55.963533  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:18:55 GMT
	I0918 19:18:55.963706  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","resourceVersion":"310","creationTimestamp":"2023-09-18T19:18:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36","minikube.k8s.io/name":"multinode-689235","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_18T19_18_17_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-18T19:18:13Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0918 19:18:55.964126  712152 node_ready.go:58] node "multinode-689235" has status "Ready":"False"
	I0918 19:18:56.459603  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:18:56.459624  712152 round_trippers.go:469] Request Headers:
	I0918 19:18:56.459634  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:18:56.459641  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:18:56.462286  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:18:56.462307  712152 round_trippers.go:577] Response Headers:
	I0918 19:18:56.462315  712152 round_trippers.go:580]     Audit-Id: cf678f95-5621-47ba-ad9a-00b21845895e
	I0918 19:18:56.462322  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:18:56.462328  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:18:56.462334  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:18:56.462340  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:18:56.462346  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:18:56 GMT
	I0918 19:18:56.462439  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","resourceVersion":"310","creationTimestamp":"2023-09-18T19:18:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36","minikube.k8s.io/name":"multinode-689235","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_18T19_18_17_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-18T19:18:13Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0918 19:18:56.960446  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:18:56.960467  712152 round_trippers.go:469] Request Headers:
	I0918 19:18:56.960477  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:18:56.960484  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:18:56.963545  712152 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 19:18:56.963568  712152 round_trippers.go:577] Response Headers:
	I0918 19:18:56.963577  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:18:56.963583  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:18:56.963590  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:18:56.963596  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:18:56 GMT
	I0918 19:18:56.963603  712152 round_trippers.go:580]     Audit-Id: b2e3fb43-15a8-4fef-a794-88fafc8d5145
	I0918 19:18:56.963613  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:18:56.964139  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","resourceVersion":"310","creationTimestamp":"2023-09-18T19:18:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36","minikube.k8s.io/name":"multinode-689235","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_18T19_18_17_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-18T19:18:13Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0918 19:18:57.460300  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:18:57.460320  712152 round_trippers.go:469] Request Headers:
	I0918 19:18:57.460330  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:18:57.460337  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:18:57.462787  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:18:57.462808  712152 round_trippers.go:577] Response Headers:
	I0918 19:18:57.462817  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:18:57.462824  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:18:57.462831  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:18:57.462837  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:18:57.462844  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:18:57 GMT
	I0918 19:18:57.462857  712152 round_trippers.go:580]     Audit-Id: 819ce327-3abf-42ce-a616-f71dc3814f9d
	I0918 19:18:57.463052  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","resourceVersion":"310","creationTimestamp":"2023-09-18T19:18:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36","minikube.k8s.io/name":"multinode-689235","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_18T19_18_17_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-18T19:18:13Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0918 19:18:57.960272  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:18:57.960292  712152 round_trippers.go:469] Request Headers:
	I0918 19:18:57.960301  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:18:57.960309  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:18:57.962844  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:18:57.962868  712152 round_trippers.go:577] Response Headers:
	I0918 19:18:57.962877  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:18:57.962883  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:18:57.962891  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:18:57.962899  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:18:57 GMT
	I0918 19:18:57.962905  712152 round_trippers.go:580]     Audit-Id: 0d350bf4-052d-43c3-8a7a-743e666df8b0
	I0918 19:18:57.962911  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:18:57.963137  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","resourceVersion":"310","creationTimestamp":"2023-09-18T19:18:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36","minikube.k8s.io/name":"multinode-689235","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_18T19_18_17_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-18T19:18:13Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0918 19:18:58.460213  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:18:58.460237  712152 round_trippers.go:469] Request Headers:
	I0918 19:18:58.460247  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:18:58.460256  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:18:58.462841  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:18:58.462861  712152 round_trippers.go:577] Response Headers:
	I0918 19:18:58.462869  712152 round_trippers.go:580]     Audit-Id: a61e2b82-9af3-4cd0-8729-95fc2de08952
	I0918 19:18:58.462876  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:18:58.462882  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:18:58.462888  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:18:58.462895  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:18:58.462901  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:18:58 GMT
	I0918 19:18:58.463016  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","resourceVersion":"310","creationTimestamp":"2023-09-18T19:18:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36","minikube.k8s.io/name":"multinode-689235","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_18T19_18_17_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-18T19:18:13Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0918 19:18:58.463417  712152 node_ready.go:58] node "multinode-689235" has status "Ready":"False"
	I0918 19:18:58.960454  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:18:58.960486  712152 round_trippers.go:469] Request Headers:
	I0918 19:18:58.960497  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:18:58.960508  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:18:58.962955  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:18:58.962981  712152 round_trippers.go:577] Response Headers:
	I0918 19:18:58.962989  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:18:58.962996  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:18:58.963002  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:18:58 GMT
	I0918 19:18:58.963008  712152 round_trippers.go:580]     Audit-Id: d805bb89-4638-4ee9-ab7a-de0f9ada34ec
	I0918 19:18:58.963015  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:18:58.963024  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:18:58.963434  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","resourceVersion":"310","creationTimestamp":"2023-09-18T19:18:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36","minikube.k8s.io/name":"multinode-689235","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_18T19_18_17_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-18T19:18:13Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0918 19:18:59.460148  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:18:59.460173  712152 round_trippers.go:469] Request Headers:
	I0918 19:18:59.460183  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:18:59.460191  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:18:59.462649  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:18:59.462676  712152 round_trippers.go:577] Response Headers:
	I0918 19:18:59.462685  712152 round_trippers.go:580]     Audit-Id: dcec595c-70de-4f98-a508-210fcf5fcfdf
	I0918 19:18:59.462692  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:18:59.462698  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:18:59.462704  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:18:59.462710  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:18:59.462716  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:18:59 GMT
	I0918 19:18:59.462841  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","resourceVersion":"310","creationTimestamp":"2023-09-18T19:18:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36","minikube.k8s.io/name":"multinode-689235","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_18T19_18_17_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-18T19:18:13Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0918 19:18:59.959538  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:18:59.959559  712152 round_trippers.go:469] Request Headers:
	I0918 19:18:59.959569  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:18:59.959576  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:18:59.962092  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:18:59.962112  712152 round_trippers.go:577] Response Headers:
	I0918 19:18:59.962121  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:18:59.962129  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:18:59 GMT
	I0918 19:18:59.962135  712152 round_trippers.go:580]     Audit-Id: d78c0007-d4fd-4347-9f06-200ac643e448
	I0918 19:18:59.962141  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:18:59.962147  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:18:59.962153  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:18:59.962269  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","resourceVersion":"310","creationTimestamp":"2023-09-18T19:18:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36","minikube.k8s.io/name":"multinode-689235","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_18T19_18_17_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-18T19:18:13Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0918 19:19:00.460499  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:19:00.460526  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:00.460543  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:00.460550  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:00.463586  712152 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 19:19:00.463610  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:00.463619  712152 round_trippers.go:580]     Audit-Id: 1d0eaf55-ff54-4bd1-bb6d-2e29ae131b6d
	I0918 19:19:00.463626  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:00.463632  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:00.463638  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:00.463644  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:00.463651  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:00 GMT
	I0918 19:19:00.464259  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","resourceVersion":"310","creationTimestamp":"2023-09-18T19:18:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36","minikube.k8s.io/name":"multinode-689235","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_18T19_18_17_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-18T19:18:13Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0918 19:19:00.464672  712152 node_ready.go:58] node "multinode-689235" has status "Ready":"False"
	I0918 19:19:00.960423  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:19:00.960449  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:00.960459  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:00.960467  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:00.963485  712152 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 19:19:00.963505  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:00.963514  712152 round_trippers.go:580]     Audit-Id: a4b85ff2-3e96-4a63-a4b8-5a9f8e783648
	I0918 19:19:00.963520  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:00.963527  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:00.963533  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:00.963539  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:00.963546  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:00 GMT
	I0918 19:19:00.963726  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","resourceVersion":"310","creationTimestamp":"2023-09-18T19:18:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36","minikube.k8s.io/name":"multinode-689235","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_18T19_18_17_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-18T19:18:13Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0918 19:19:01.459546  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:19:01.459568  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:01.459578  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:01.459585  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:01.462071  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:01.462097  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:01.462106  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:01.462112  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:01.462121  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:01 GMT
	I0918 19:19:01.462127  712152 round_trippers.go:580]     Audit-Id: 1e65901e-cc92-4ef9-8d08-74ec9eec61f2
	I0918 19:19:01.462135  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:01.462147  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:01.463450  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","resourceVersion":"310","creationTimestamp":"2023-09-18T19:18:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36","minikube.k8s.io/name":"multinode-689235","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_18T19_18_17_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-18T19:18:13Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0918 19:19:01.960150  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:19:01.960175  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:01.960193  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:01.960201  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:01.962949  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:01.962975  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:01.962984  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:01.962991  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:01.962997  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:01.963003  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:01 GMT
	I0918 19:19:01.963010  712152 round_trippers.go:580]     Audit-Id: 15091be8-8907-447b-b1f0-a09df7e49940
	I0918 19:19:01.963017  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:01.963251  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","resourceVersion":"310","creationTimestamp":"2023-09-18T19:18:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36","minikube.k8s.io/name":"multinode-689235","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_18T19_18_17_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-18T19:18:13Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I0918 19:19:02.460457  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:19:02.460482  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:02.460492  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:02.460500  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:02.463118  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:02.463145  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:02.463154  712152 round_trippers.go:580]     Audit-Id: bb87639c-7cd7-4300-aeca-70bd1c8f7956
	I0918 19:19:02.463160  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:02.463166  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:02.463172  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:02.463179  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:02.463186  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:02 GMT
	I0918 19:19:02.463322  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","resourceVersion":"400","creationTimestamp":"2023-09-18T19:18:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36","minikube.k8s.io/name":"multinode-689235","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_18T19_18_17_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-18T19:18:13Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0918 19:19:02.463729  712152 node_ready.go:49] node "multinode-689235" has status "Ready":"True"
	I0918 19:19:02.463748  712152 node_ready.go:38] duration metric: took 31.519239206s waiting for node "multinode-689235" to be "Ready" ...
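	[editor's note] The node turned Ready after ~31.5s of the ~500ms polling shown above. For readers tracing these blocks: each GET /api/v1/nodes/multinode-689235 plus its headers and truncated body is one iteration of a readiness wait loop. Below is a minimal client-go sketch of such a loop. It is an illustration only, not minikube's actual node_ready.go; the "/path/to/kubeconfig" path is a placeholder, and everything else (node name, 500ms interval, 6m budget) is taken from the log itself.

	package main

	import (
	    "context"
	    "fmt"
	    "time"

	    corev1 "k8s.io/api/core/v1"
	    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    "k8s.io/client-go/kubernetes"
	    "k8s.io/client-go/tools/clientcmd"
	)

	// nodeReady reports whether the node's NodeReady condition is "True",
	// the same status the node_ready lines above are printing.
	func nodeReady(node *corev1.Node) bool {
	    for _, c := range node.Status.Conditions {
	        if c.Type == corev1.NodeReady {
	            return c.Status == corev1.ConditionTrue
	        }
	    }
	    return false
	}

	func main() {
	    // Placeholder path; minikube points its client at the cluster it created.
	    cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	    if err != nil {
	        panic(err)
	    }
	    client, err := kubernetes.NewForConfig(cfg)
	    if err != nil {
	        panic(err)
	    }
	    deadline := time.Now().Add(6 * time.Minute)
	    for time.Now().Before(deadline) {
	        node, err := client.CoreV1().Nodes().Get(context.TODO(), "multinode-689235", metav1.GetOptions{})
	        if err == nil && nodeReady(node) {
	            fmt.Println("node is Ready")
	            return
	        }
	        // One GET per iteration, roughly every 500ms, matching the
	        // request cadence visible in the timestamps above.
	        time.Sleep(500 * time.Millisecond)
	    }
	    fmt.Println("timed out waiting for node to become Ready")
	}

	The duration metric logged above ("took 31.519239206s") is simply the total wall time such a loop ran before the Ready condition flipped to "True" (visible in the resourceVersion change from 310 to 400).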
	I0918 19:19:02.463772  712152 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 19:19:02.463880  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0918 19:19:02.463888  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:02.463896  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:02.463903  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:02.467746  712152 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 19:19:02.467833  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:02.467842  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:02.467849  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:02 GMT
	I0918 19:19:02.467855  712152 round_trippers.go:580]     Audit-Id: 236e7feb-83be-4f68-8931-53f498c3349c
	I0918 19:19:02.467862  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:02.467873  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:02.467883  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:02.468259  712152 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"406"},"items":[{"metadata":{"name":"coredns-5dd5756b68-52fpx","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d643472b-4be9-4a29-bf6a-e83171d46b1c","resourceVersion":"406","creationTimestamp":"2023-09-18T19:18:30Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"50b21e75-4c2c-4915-bb6e-5bee1d42dabc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:18:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"50b21e75-4c2c-4915-bb6e-5bee1d42dabc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55534 chars]
	I0918 19:19:02.472284  712152 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-52fpx" in "kube-system" namespace to be "Ready" ...
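	[editor's note] The same wait pattern now repeats per system pod, starting with coredns-5dd5756b68-52fpx. A minimal sketch of a pod-readiness wait follows, under the same caveats as the node sketch above: it is illustrative, not minikube's pod_ready.go; the helper names waitPodReady/podReady are invented for this example, and the kubeconfig path is a placeholder.

	package main

	import (
	    "context"
	    "fmt"
	    "time"

	    corev1 "k8s.io/api/core/v1"
	    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    "k8s.io/apimachinery/pkg/util/wait"
	    "k8s.io/client-go/kubernetes"
	    "k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's PodReady condition is "True".
	func podReady(pod *corev1.Pod) bool {
	    for _, c := range pod.Status.Conditions {
	        if c.Type == corev1.PodReady {
	            return c.Status == corev1.ConditionTrue
	        }
	    }
	    return false
	}

	// waitPodReady polls the pod every 500ms for up to 6 minutes,
	// mirroring the "waiting up to 6m0s" budget logged above.
	func waitPodReady(client *kubernetes.Clientset, ns, name string) error {
	    return wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
	        pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	        if err != nil {
	            return false, nil // treat transient API errors as "not ready yet"
	        }
	        return podReady(pod), nil
	    })
	}

	func main() {
	    cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	    if err != nil {
	        panic(err)
	    }
	    client, err := kubernetes.NewForConfig(cfg)
	    if err != nil {
	        panic(err)
	    }
	    if err := waitPodReady(client, "kube-system", "coredns-5dd5756b68-52fpx"); err != nil {
	        fmt.Println("pod never became Ready:", err)
	    }
	}

	Note one difference visible in the log below: the pod GETs alternate with GETs of /api/v1/nodes/multinode-689235, so minikube evidently re-verifies the hosting node's readiness between pod checks, which this sketch leaves out.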
	I0918 19:19:02.472379  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-52fpx
	I0918 19:19:02.472391  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:02.472400  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:02.472410  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:02.474902  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:02.474921  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:02.474931  712152 round_trippers.go:580]     Audit-Id: 0352c447-f241-4feb-915a-d06df6236b77
	I0918 19:19:02.474938  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:02.474944  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:02.474954  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:02.474968  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:02.474975  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:02 GMT
	I0918 19:19:02.475295  712152 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-52fpx","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d643472b-4be9-4a29-bf6a-e83171d46b1c","resourceVersion":"406","creationTimestamp":"2023-09-18T19:18:30Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"50b21e75-4c2c-4915-bb6e-5bee1d42dabc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:18:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"50b21e75-4c2c-4915-bb6e-5bee1d42dabc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0918 19:19:02.475809  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:19:02.475820  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:02.475829  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:02.475849  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:02.478141  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:02.478156  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:02.478164  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:02.478170  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:02 GMT
	I0918 19:19:02.478176  712152 round_trippers.go:580]     Audit-Id: 6ffea039-2273-47b4-acae-fd9a40849870
	I0918 19:19:02.478183  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:02.478189  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:02.478195  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:02.478306  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","resourceVersion":"400","creationTimestamp":"2023-09-18T19:18:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36","minikube.k8s.io/name":"multinode-689235","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_18T19_18_17_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-18T19:18:13Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0918 19:19:02.478727  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-52fpx
	I0918 19:19:02.478735  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:02.478742  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:02.478749  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:02.481035  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:02.481052  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:02.481060  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:02.481066  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:02.481072  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:02.481079  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:02.481085  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:02 GMT
	I0918 19:19:02.481095  712152 round_trippers.go:580]     Audit-Id: e2518287-9f61-43b2-930d-900266552ae9
	I0918 19:19:02.481335  712152 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-52fpx","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d643472b-4be9-4a29-bf6a-e83171d46b1c","resourceVersion":"406","creationTimestamp":"2023-09-18T19:18:30Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"50b21e75-4c2c-4915-bb6e-5bee1d42dabc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:18:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"50b21e75-4c2c-4915-bb6e-5bee1d42dabc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0918 19:19:02.481885  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:19:02.481905  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:02.481913  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:02.481927  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:02.484402  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:02.484423  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:02.484431  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:02.484438  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:02 GMT
	I0918 19:19:02.484444  712152 round_trippers.go:580]     Audit-Id: 0fa5435d-9d59-4974-bf63-3c111a9159b7
	I0918 19:19:02.484450  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:02.484456  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:02.484462  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:02.484573  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","resourceVersion":"400","creationTimestamp":"2023-09-18T19:18:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36","minikube.k8s.io/name":"multinode-689235","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_18T19_18_17_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-18T19:18:13Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0918 19:19:02.985736  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-52fpx
	I0918 19:19:02.985760  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:02.985770  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:02.985778  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:02.988540  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:02.988573  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:02.988582  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:02.988589  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:02.988601  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:02.988608  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:02.988617  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:02 GMT
	I0918 19:19:02.988624  712152 round_trippers.go:580]     Audit-Id: de1745a6-2405-40f8-86df-78bbf0fb5ff7
	I0918 19:19:02.988771  712152 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-52fpx","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d643472b-4be9-4a29-bf6a-e83171d46b1c","resourceVersion":"406","creationTimestamp":"2023-09-18T19:18:30Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"50b21e75-4c2c-4915-bb6e-5bee1d42dabc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:18:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"50b21e75-4c2c-4915-bb6e-5bee1d42dabc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0918 19:19:02.989357  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:19:02.989375  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:02.989388  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:02.989395  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:02.991993  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:02.992019  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:02.992028  712152 round_trippers.go:580]     Audit-Id: 69713f64-2e2d-42e1-9df7-ea48bf21cac9
	I0918 19:19:02.992034  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:02.992041  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:02.992047  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:02.992054  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:02.992060  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:02 GMT
	I0918 19:19:02.992202  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","resourceVersion":"400","creationTimestamp":"2023-09-18T19:18:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36","minikube.k8s.io/name":"multinode-689235","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_18T19_18_17_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-18T19:18:13Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0918 19:19:03.485212  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-52fpx
	I0918 19:19:03.485235  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:03.485244  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:03.485252  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:03.487899  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:03.487919  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:03.487927  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:03.487934  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:03 GMT
	I0918 19:19:03.487940  712152 round_trippers.go:580]     Audit-Id: 17a44940-ad92-4ecd-b1c4-65bcf7eceae6
	I0918 19:19:03.487946  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:03.487952  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:03.487959  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:03.488136  712152 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-52fpx","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d643472b-4be9-4a29-bf6a-e83171d46b1c","resourceVersion":"419","creationTimestamp":"2023-09-18T19:18:30Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"50b21e75-4c2c-4915-bb6e-5bee1d42dabc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:18:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"50b21e75-4c2c-4915-bb6e-5bee1d42dabc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I0918 19:19:03.488728  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:19:03.488742  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:03.488750  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:03.488761  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:03.491199  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:03.491219  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:03.491228  712152 round_trippers.go:580]     Audit-Id: 989900b4-aad1-4810-a3c9-a04eb117a6fe
	I0918 19:19:03.491235  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:03.491241  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:03.491248  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:03.491255  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:03.491265  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:03 GMT
	I0918 19:19:03.491509  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","resourceVersion":"400","creationTimestamp":"2023-09-18T19:18:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36","minikube.k8s.io/name":"multinode-689235","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_18T19_18_17_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-18T19:18:13Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0918 19:19:03.491950  712152 pod_ready.go:92] pod "coredns-5dd5756b68-52fpx" in "kube-system" namespace has status "Ready":"True"
	I0918 19:19:03.491963  712152 pod_ready.go:81] duration metric: took 1.019644201s waiting for pod "coredns-5dd5756b68-52fpx" in "kube-system" namespace to be "Ready" ...
	I0918 19:19:03.491973  712152 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-689235" in "kube-system" namespace to be "Ready" ...
	I0918 19:19:03.492030  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-689235
	I0918 19:19:03.492038  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:03.492046  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:03.492053  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:03.494447  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:03.494464  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:03.494472  712152 round_trippers.go:580]     Audit-Id: 61d9eb22-81ca-41ef-a999-44aa730a580a
	I0918 19:19:03.494479  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:03.494527  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:03.494550  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:03.494557  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:03.494563  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:03 GMT
	I0918 19:19:03.494694  712152 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-689235","namespace":"kube-system","uid":"1bc456e1-2455-4466-8f8f-6e27f3e804f2","resourceVersion":"387","creationTimestamp":"2023-09-18T19:18:17Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"58704de8334e799fab0624e8a943846a","kubernetes.io/config.mirror":"58704de8334e799fab0624e8a943846a","kubernetes.io/config.seen":"2023-09-18T19:18:16.900580807Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:18:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I0918 19:19:03.495150  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:19:03.495162  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:03.495170  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:03.495177  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:03.497431  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:03.497449  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:03.497457  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:03.497464  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:03 GMT
	I0918 19:19:03.497470  712152 round_trippers.go:580]     Audit-Id: 90f7af53-8bd6-45c7-a5ab-44e0db38e417
	I0918 19:19:03.497476  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:03.497482  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:03.497487  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:03.497629  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","resourceVersion":"400","creationTimestamp":"2023-09-18T19:18:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36","minikube.k8s.io/name":"multinode-689235","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_18T19_18_17_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-18T19:18:13Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0918 19:19:03.498011  712152 pod_ready.go:92] pod "etcd-multinode-689235" in "kube-system" namespace has status "Ready":"True"
	I0918 19:19:03.498032  712152 pod_ready.go:81] duration metric: took 6.052658ms waiting for pod "etcd-multinode-689235" in "kube-system" namespace to be "Ready" ...
	I0918 19:19:03.498046  712152 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-689235" in "kube-system" namespace to be "Ready" ...
	I0918 19:19:03.498110  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-689235
	I0918 19:19:03.498118  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:03.498126  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:03.498133  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:03.500375  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:03.500425  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:03.500434  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:03.500440  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:03.500446  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:03.500452  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:03.500458  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:03 GMT
	I0918 19:19:03.500464  712152 round_trippers.go:580]     Audit-Id: f93f0c35-1577-4458-8036-caafd1ca597b
	I0918 19:19:03.500660  712152 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-689235","namespace":"kube-system","uid":"8fd4d983-6d28-45c4-8701-40cca4fbe65a","resourceVersion":"390","creationTimestamp":"2023-09-18T19:18:16Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"8cbdf887d99a1fc14e5f027ff73e02fd","kubernetes.io/config.mirror":"8cbdf887d99a1fc14e5f027ff73e02fd","kubernetes.io/config.seen":"2023-09-18T19:18:08.282472686Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:18:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I0918 19:19:03.501661  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:19:03.501714  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:03.501736  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:03.501755  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:03.505970  712152 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0918 19:19:03.505996  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:03.506005  712152 round_trippers.go:580]     Audit-Id: 4ea19627-f4a7-48f8-963e-f3a30ddb59cb
	I0918 19:19:03.506012  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:03.506018  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:03.506025  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:03.506034  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:03.506045  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:03 GMT
	I0918 19:19:03.506149  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","resourceVersion":"400","creationTimestamp":"2023-09-18T19:18:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36","minikube.k8s.io/name":"multinode-689235","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_18T19_18_17_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-18T19:18:13Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0918 19:19:03.506535  712152 pod_ready.go:92] pod "kube-apiserver-multinode-689235" in "kube-system" namespace has status "Ready":"True"
	I0918 19:19:03.506550  712152 pod_ready.go:81] duration metric: took 8.493315ms waiting for pod "kube-apiserver-multinode-689235" in "kube-system" namespace to be "Ready" ...
	I0918 19:19:03.506561  712152 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-689235" in "kube-system" namespace to be "Ready" ...
	I0918 19:19:03.506622  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-689235
	I0918 19:19:03.506631  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:03.506648  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:03.506657  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:03.509374  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:03.509443  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:03.509456  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:03.509462  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:03.509469  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:03.509475  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:03 GMT
	I0918 19:19:03.509481  712152 round_trippers.go:580]     Audit-Id: c48be865-00cb-4099-9e7a-4183cef434e9
	I0918 19:19:03.509488  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:03.509639  712152 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-689235","namespace":"kube-system","uid":"249188f1-89c0-4de2-b1fa-5d4ec581f882","resourceVersion":"389","creationTimestamp":"2023-09-18T19:18:17Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d981450df7224320f56b3d04a848ea78","kubernetes.io/config.mirror":"d981450df7224320f56b3d04a848ea78","kubernetes.io/config.seen":"2023-09-18T19:18:16.900573767Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:18:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I0918 19:19:03.661006  712152 request.go:629] Waited for 150.836112ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:19:03.661085  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:19:03.661098  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:03.661107  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:03.661117  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:03.664252  712152 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 19:19:03.664336  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:03.664351  712152 round_trippers.go:580]     Audit-Id: 975d0d9d-a92d-4a0d-b5ff-9176f60a564a
	I0918 19:19:03.664362  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:03.664377  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:03.664395  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:03.664412  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:03.664424  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:03 GMT
	I0918 19:19:03.664880  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","resourceVersion":"400","creationTimestamp":"2023-09-18T19:18:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36","minikube.k8s.io/name":"multinode-689235","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_18T19_18_17_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-18T19:18:13Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0918 19:19:03.665319  712152 pod_ready.go:92] pod "kube-controller-manager-multinode-689235" in "kube-system" namespace has status "Ready":"True"
	I0918 19:19:03.665339  712152 pod_ready.go:81] duration metric: took 158.766766ms waiting for pod "kube-controller-manager-multinode-689235" in "kube-system" namespace to be "Ready" ...
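
The "Waited for 150.836112ms due to client-side throttling, not priority and fairness" message above is emitted by client-go's token-bucket rate limiter: once a client issues requests faster than its configured QPS, further calls are delayed locally, before the server's priority-and-fairness layer is ever consulted. A minimal sketch of where those limits live, assuming a kubeconfig-based client (the path and the QPS/Burst values are illustrative, not minikube's actual settings):

	package main

	import (
		"fmt"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load a kubeconfig the usual way (path is illustrative).
		config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}

		// client-go throttles on the client side once requests exceed QPS,
		// allowing short spikes up to Burst. Exceeding these produces the
		// "Waited for ... due to client-side throttling" lines above.
		config.QPS = 5    // steady-state requests per second (illustrative)
		config.Burst = 10 // short-term burst allowance (illustrative)

		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		fmt.Printf("client ready: %T\n", clientset)
	}

Raising Burst would let a batch of status polls like the ones above go through without the injected waits, at the cost of more load on the apiserver.
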
	I0918 19:19:03.665355  712152 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fgvhl" in "kube-system" namespace to be "Ready" ...
	I0918 19:19:03.860766  712152 request.go:629] Waited for 195.342037ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fgvhl
	I0918 19:19:03.860851  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fgvhl
	I0918 19:19:03.860858  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:03.860873  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:03.860885  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:03.863777  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:03.863843  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:03.863852  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:03.863861  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:03 GMT
	I0918 19:19:03.863868  712152 round_trippers.go:580]     Audit-Id: e8d3d6a9-921d-419c-8226-6d4d6897adeb
	I0918 19:19:03.863878  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:03.863894  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:03.863904  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:03.864111  712152 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-fgvhl","generateName":"kube-proxy-","namespace":"kube-system","uid":"aedacfda-e3d4-48ea-8612-a3a48c64a15d","resourceVersion":"381","creationTimestamp":"2023-09-18T19:18:30Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8848d295-fe10-4902-8477-fffd231f32ff","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:18:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8848d295-fe10-4902-8477-fffd231f32ff\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I0918 19:19:04.061110  712152 request.go:629] Waited for 196.3633ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:19:04.061177  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:19:04.061185  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:04.061199  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:04.061206  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:04.064249  712152 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 19:19:04.064332  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:04.064392  712152 round_trippers.go:580]     Audit-Id: 8c3b2971-69dc-460e-8279-b0359f6e4567
	I0918 19:19:04.064415  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:04.064428  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:04.064435  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:04.064443  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:04.064451  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:04 GMT
	I0918 19:19:04.064571  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","resourceVersion":"400","creationTimestamp":"2023-09-18T19:18:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36","minikube.k8s.io/name":"multinode-689235","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_18T19_18_17_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-18T19:18:13Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0918 19:19:04.064992  712152 pod_ready.go:92] pod "kube-proxy-fgvhl" in "kube-system" namespace has status "Ready":"True"
	I0918 19:19:04.065008  712152 pod_ready.go:81] duration metric: took 399.646542ms waiting for pod "kube-proxy-fgvhl" in "kube-system" namespace to be "Ready" ...
	I0918 19:19:04.065020  712152 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-689235" in "kube-system" namespace to be "Ready" ...
	I0918 19:19:04.261324  712152 request.go:629] Waited for 196.236013ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-689235
	I0918 19:19:04.261391  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-689235
	I0918 19:19:04.261401  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:04.261410  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:04.261418  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:04.263943  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:04.264006  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:04.264027  712152 round_trippers.go:580]     Audit-Id: d1e8bd7e-08d7-4767-aac0-bd7d35d88b34
	I0918 19:19:04.264046  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:04.264078  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:04.264101  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:04.264119  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:04.264140  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:04 GMT
	I0918 19:19:04.264275  712152 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-689235","namespace":"kube-system","uid":"59a3807d-aea7-4edd-a329-f208496dd249","resourceVersion":"388","creationTimestamp":"2023-09-18T19:18:17Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4cd8ef9e9e1c62bf0f4649ea7d8fab42","kubernetes.io/config.mirror":"4cd8ef9e9e1c62bf0f4649ea7d8fab42","kubernetes.io/config.seen":"2023-09-18T19:18:16.900579010Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:18:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I0918 19:19:04.461000  712152 request.go:629] Waited for 196.226848ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:19:04.461083  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:19:04.461095  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:04.461104  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:04.461117  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:04.463622  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:04.463646  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:04.463654  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:04.463660  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:04.463667  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:04.463672  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:04 GMT
	I0918 19:19:04.463681  712152 round_trippers.go:580]     Audit-Id: 3685b263-3200-4e2b-88a7-c0e574678499
	I0918 19:19:04.463687  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:04.464015  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","resourceVersion":"400","creationTimestamp":"2023-09-18T19:18:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36","minikube.k8s.io/name":"multinode-689235","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_18T19_18_17_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-18T19:18:13Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0918 19:19:04.464454  712152 pod_ready.go:92] pod "kube-scheduler-multinode-689235" in "kube-system" namespace has status "Ready":"True"
	I0918 19:19:04.464471  712152 pod_ready.go:81] duration metric: took 399.441635ms waiting for pod "kube-scheduler-multinode-689235" in "kube-system" namespace to be "Ready" ...
	I0918 19:19:04.464484  712152 pod_ready.go:38] duration metric: took 2.000661508s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
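
Each "pod ... has status Ready:True" conclusion above is reached by fetching the pod and inspecting its PodReady condition. A minimal client-go sketch of that check (the helper name is illustrative; this is the shape of the test, not minikube's pod_ready.go verbatim):

	package podready

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// isPodReady fetches one pod and reports whether its PodReady
	// condition is True, which is what each poll above is deciding.
	func isPodReady(ctx context.Context, c kubernetes.Interface, ns, name string) (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

The surrounding loop calls a check like this roughly every 500ms (compare the timestamps above) until it returns true or the 6m0s budget runs out.
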
	I0918 19:19:04.464505  712152 api_server.go:52] waiting for apiserver process to appear ...
	I0918 19:19:04.464566  712152 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 19:19:04.478606  712152 command_runner.go:130] > 1253
	I0918 19:19:04.478645  712152 api_server.go:72] duration metric: took 34.080077316s to wait for apiserver process to appear ...
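
The process wait is a single pgrep run over SSH; the matched PID (1253 here) is echoed back via command_runner. A rough local equivalent with os/exec, using the exact pattern from the log (minikube runs this inside the node over SSH, which is omitted here):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// -x: match the pattern against the full command line exactly,
		// -n: newest matching process, -f: match the full argument list.
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err != nil {
			fmt.Println("apiserver process not found:", err)
			return
		}
		fmt.Println("apiserver pid:", strings.TrimSpace(string(out)))
	}
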
	I0918 19:19:04.478655  712152 api_server.go:88] waiting for apiserver healthz status ...
	I0918 19:19:04.478672  712152 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0918 19:19:04.488713  712152 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0918 19:19:04.488800  712152 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I0918 19:19:04.488832  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:04.488848  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:04.488857  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:04.490038  712152 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0918 19:19:04.490058  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:04.490067  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:04.490075  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:04.490081  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:04.490102  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:04.490118  712152 round_trippers.go:580]     Content-Length: 263
	I0918 19:19:04.490125  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:04 GMT
	I0918 19:19:04.490133  712152 round_trippers.go:580]     Audit-Id: 4a9030ab-f805-4282-b20b-69262e0bb922
	I0918 19:19:04.490150  712152 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.2",
	  "gitCommit": "89a4ea3e1e4ddd7f7572286090359983e0387b2f",
	  "gitTreeState": "clean",
	  "buildDate": "2023-09-13T09:29:07Z",
	  "goVersion": "go1.20.8",
	  "compiler": "gc",
	  "platform": "linux/arm64"
	}
	I0918 19:19:04.490252  712152 api_server.go:141] control plane version: v1.28.2
	I0918 19:19:04.490268  712152 api_server.go:131] duration metric: took 11.607541ms to wait for apiserver health ...
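
The health wait itself is two plain HTTPS GETs: /healthz must return 200 with body "ok", and /version is decoded to report the control-plane version. A condensed sketch of both calls (InsecureSkipVerify keeps the sketch short; real code should trust the cluster CA instead, and only the /version fields actually used are declared):

	package main

	import (
		"crypto/tls"
		"encoding/json"
		"fmt"
		"io"
		"net/http"
	)

	type versionInfo struct {
		Major      string `json:"major"`
		Minor      string `json:"minor"`
		GitVersion string `json:"gitVersion"`
	}

	func main() {
		// Assumption: skipping TLS verification for brevity only.
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}

		resp, err := client.Get("https://192.168.58.2:8443/healthz")
		if err != nil {
			panic(err)
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("healthz: %d %s\n", resp.StatusCode, body) // expect: 200 ok

		resp, err = client.Get("https://192.168.58.2:8443/version")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		var v versionInfo
		if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
			panic(err)
		}
		fmt.Println("control plane version:", v.GitVersion) // v1.28.2 above
	}

The 263-byte /version response body shown above is exactly what the second call decodes.
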
	I0918 19:19:04.490292  712152 system_pods.go:43] waiting for kube-system pods to appear ...
	I0918 19:19:04.660544  712152 request.go:629] Waited for 170.175407ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0918 19:19:04.660628  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0918 19:19:04.660657  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:04.660669  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:04.660695  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:04.664594  712152 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 19:19:04.664616  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:04.664626  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:04.664632  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:04 GMT
	I0918 19:19:04.664654  712152 round_trippers.go:580]     Audit-Id: 510ca05e-11ee-4380-bce3-5f7918ab0871
	I0918 19:19:04.664668  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:04.664674  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:04.664684  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:04.665113  712152 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"424"},"items":[{"metadata":{"name":"coredns-5dd5756b68-52fpx","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d643472b-4be9-4a29-bf6a-e83171d46b1c","resourceVersion":"419","creationTimestamp":"2023-09-18T19:18:30Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"50b21e75-4c2c-4915-bb6e-5bee1d42dabc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:18:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"50b21e75-4c2c-4915-bb6e-5bee1d42dabc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55612 chars]
	I0918 19:19:04.667549  712152 system_pods.go:59] 8 kube-system pods found
	I0918 19:19:04.667577  712152 system_pods.go:61] "coredns-5dd5756b68-52fpx" [d643472b-4be9-4a29-bf6a-e83171d46b1c] Running
	I0918 19:19:04.667583  712152 system_pods.go:61] "etcd-multinode-689235" [1bc456e1-2455-4466-8f8f-6e27f3e804f2] Running
	I0918 19:19:04.667588  712152 system_pods.go:61] "kindnet-5jgz2" [5e31cabf-9e1d-4835-ae0c-68154199c5f0] Running
	I0918 19:19:04.667594  712152 system_pods.go:61] "kube-apiserver-multinode-689235" [8fd4d983-6d28-45c4-8701-40cca4fbe65a] Running
	I0918 19:19:04.667600  712152 system_pods.go:61] "kube-controller-manager-multinode-689235" [249188f1-89c0-4de2-b1fa-5d4ec581f882] Running
	I0918 19:19:04.667606  712152 system_pods.go:61] "kube-proxy-fgvhl" [aedacfda-e3d4-48ea-8612-a3a48c64a15d] Running
	I0918 19:19:04.667611  712152 system_pods.go:61] "kube-scheduler-multinode-689235" [59a3807d-aea7-4edd-a329-f208496dd249] Running
	I0918 19:19:04.667616  712152 system_pods.go:61] "storage-provisioner" [e63a1107-d248-405b-b8a7-367a9a5682de] Running
	I0918 19:19:04.667628  712152 system_pods.go:74] duration metric: took 177.329566ms to wait for pod list to return data ...
	I0918 19:19:04.667644  712152 default_sa.go:34] waiting for default service account to be created ...
	I0918 19:19:04.861062  712152 request.go:629] Waited for 193.341537ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0918 19:19:04.861146  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0918 19:19:04.861152  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:04.861161  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:04.861173  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:04.864280  712152 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 19:19:04.864307  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:04.864316  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:04.864322  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:04.864329  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:04.864335  712152 round_trippers.go:580]     Content-Length: 261
	I0918 19:19:04.864348  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:04 GMT
	I0918 19:19:04.864359  712152 round_trippers.go:580]     Audit-Id: ef050728-84b5-4405-8d1d-192d42056af6
	I0918 19:19:04.864384  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:04.864409  712152 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"425"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"3e3a3148-0e46-4b3c-a746-8215f80f65b2","resourceVersion":"303","creationTimestamp":"2023-09-18T19:18:29Z"}}]}
	I0918 19:19:04.864639  712152 default_sa.go:45] found service account: "default"
	I0918 19:19:04.864657  712152 default_sa.go:55] duration metric: took 197.007036ms for default service account to be created ...
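
"found service account: default" is the result of listing ServiceAccounts in the default namespace and looking for one named "default", which kube-controller-manager creates shortly after startup. A sketch of that check (helper name illustrative):

	package defaultsa

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// hasDefaultServiceAccount reports whether the "default" ServiceAccount
	// exists yet; until it does, pods in the namespace cannot be admitted.
	func hasDefaultServiceAccount(ctx context.Context, c kubernetes.Interface) (bool, error) {
		sas, err := c.CoreV1().ServiceAccounts("default").List(ctx, metav1.ListOptions{})
		if err != nil {
			return false, err
		}
		for _, sa := range sas.Items {
			if sa.Name == "default" {
				return true, nil
			}
		}
		return false, nil
	}
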
	I0918 19:19:04.864667  712152 system_pods.go:116] waiting for k8s-apps to be running ...
	I0918 19:19:05.061093  712152 request.go:629] Waited for 196.361741ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0918 19:19:05.061154  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0918 19:19:05.061159  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:05.061169  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:05.061180  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:05.064826  712152 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 19:19:05.065062  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:05.065103  712152 round_trippers.go:580]     Audit-Id: fb3fc5d7-38cb-4115-a2d2-19566a9e81ea
	I0918 19:19:05.065156  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:05.065187  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:05.065195  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:05.065218  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:05.065231  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:05 GMT
	I0918 19:19:05.065684  712152 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"425"},"items":[{"metadata":{"name":"coredns-5dd5756b68-52fpx","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d643472b-4be9-4a29-bf6a-e83171d46b1c","resourceVersion":"419","creationTimestamp":"2023-09-18T19:18:30Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"50b21e75-4c2c-4915-bb6e-5bee1d42dabc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:18:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"50b21e75-4c2c-4915-bb6e-5bee1d42dabc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55612 chars]
	I0918 19:19:05.068815  712152 system_pods.go:86] 8 kube-system pods found
	I0918 19:19:05.068856  712152 system_pods.go:89] "coredns-5dd5756b68-52fpx" [d643472b-4be9-4a29-bf6a-e83171d46b1c] Running
	I0918 19:19:05.068869  712152 system_pods.go:89] "etcd-multinode-689235" [1bc456e1-2455-4466-8f8f-6e27f3e804f2] Running
	I0918 19:19:05.068882  712152 system_pods.go:89] "kindnet-5jgz2" [5e31cabf-9e1d-4835-ae0c-68154199c5f0] Running
	I0918 19:19:05.068894  712152 system_pods.go:89] "kube-apiserver-multinode-689235" [8fd4d983-6d28-45c4-8701-40cca4fbe65a] Running
	I0918 19:19:05.068903  712152 system_pods.go:89] "kube-controller-manager-multinode-689235" [249188f1-89c0-4de2-b1fa-5d4ec581f882] Running
	I0918 19:19:05.068909  712152 system_pods.go:89] "kube-proxy-fgvhl" [aedacfda-e3d4-48ea-8612-a3a48c64a15d] Running
	I0918 19:19:05.068916  712152 system_pods.go:89] "kube-scheduler-multinode-689235" [59a3807d-aea7-4edd-a329-f208496dd249] Running
	I0918 19:19:05.068922  712152 system_pods.go:89] "storage-provisioner" [e63a1107-d248-405b-b8a7-367a9a5682de] Running
	I0918 19:19:05.068931  712152 system_pods.go:126] duration metric: took 204.260059ms to wait for k8s-apps to be running ...
	I0918 19:19:05.068940  712152 system_svc.go:44] waiting for kubelet service to be running ....
	I0918 19:19:05.069022  712152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 19:19:05.084102  712152 system_svc.go:56] duration metric: took 15.149265ms WaitForService to wait for kubelet.
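
The kubelet check is just systemctl is-active --quiet, whose exit code carries the answer: is-active exits 0 when at least one of the listed units is active. A local sketch mirroring the command from the log (minikube executes it over SSH inside the node):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// --quiet suppresses the printed state; only the exit code matters.
		err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run()
		if err != nil {
			fmt.Println("kubelet is not active:", err)
			return
		}
		fmt.Println("kubelet is active")
	}
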
	I0918 19:19:05.084131  712152 kubeadm.go:581] duration metric: took 34.685564575s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0918 19:19:05.084155  712152 node_conditions.go:102] verifying NodePressure condition ...
	I0918 19:19:05.260518  712152 request.go:629] Waited for 176.263937ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0918 19:19:05.260583  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0918 19:19:05.260589  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:05.260602  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:05.260615  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:05.263270  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:05.263292  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:05.263300  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:05 GMT
	I0918 19:19:05.263307  712152 round_trippers.go:580]     Audit-Id: cab5ad81-5691-4506-9885-a6bec05c34a2
	I0918 19:19:05.263313  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:05.263319  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:05.263325  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:05.263331  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:05.263473  712152 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"425"},"items":[{"metadata":{"name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","resourceVersion":"400","creationTimestamp":"2023-09-18T19:18:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36","minikube.k8s.io/name":"multinode-689235","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_18T19_18_17_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 6082 chars]
	I0918 19:19:05.263967  712152 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0918 19:19:05.263986  712152 node_conditions.go:123] node cpu capacity is 2
	I0918 19:19:05.263997  712152 node_conditions.go:105] duration metric: took 179.837333ms to run NodePressure ...
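
The NodePressure verification reads the node's reported capacity; the two figures logged above are Status.Capacity for ephemeral-storage and cpu. A sketch using client-go's resource.Quantity API (helper name illustrative):

	package nodecap

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// printNodeCapacity reports the same two figures as node_conditions.go
	// above: ephemeral storage (203034800Ki here) and CPU count (2 here).
	func printNodeCapacity(ctx context.Context, c kubernetes.Interface, name string) error {
		node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := node.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("node storage ephemeral capacity is %s\n", storage.String())
		fmt.Printf("node cpu capacity is %d\n", cpu.Value())
		return nil
	}
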
	I0918 19:19:05.264008  712152 start.go:228] waiting for startup goroutines ...
	I0918 19:19:05.264015  712152 start.go:233] waiting for cluster config update ...
	I0918 19:19:05.264024  712152 start.go:242] writing updated cluster config ...
	I0918 19:19:05.267452  712152 out.go:177] 
	I0918 19:19:05.270336  712152 config.go:182] Loaded profile config "multinode-689235": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0918 19:19:05.270429  712152 profile.go:148] Saving config to /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/multinode-689235/config.json ...
	I0918 19:19:05.273285  712152 out.go:177] * Starting worker node multinode-689235-m02 in cluster multinode-689235
	I0918 19:19:05.275601  712152 cache.go:122] Beginning downloading kic base image for docker with crio
	I0918 19:19:05.277842  712152 out.go:177] * Pulling base image ...
	I0918 19:19:05.280872  712152 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I0918 19:19:05.280911  712152 cache.go:57] Caching tarball of preloaded images
	I0918 19:19:05.280968  712152 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon
	I0918 19:19:05.281248  712152 preload.go:174] Found /home/jenkins/minikube-integration/17263-642665/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I0918 19:19:05.281293  712152 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on crio
	I0918 19:19:05.281424  712152 profile.go:148] Saving config to /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/multinode-689235/config.json ...
	I0918 19:19:05.302935  712152 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon, skipping pull
	I0918 19:19:05.302958  712152 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 exists in daemon, skipping load
	I0918 19:19:05.302978  712152 cache.go:195] Successfully downloaded all kic artifacts
	I0918 19:19:05.303010  712152 start.go:365] acquiring machines lock for multinode-689235-m02: {Name:mk4cd2d85fefaaf671600fb9bc40d841c33fd13c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 19:19:05.303136  712152 start.go:369] acquired machines lock for "multinode-689235-m02" in 109.129µs
	I0918 19:19:05.303297  712152 start.go:93] Provisioning new machine with config: &{Name:multinode-689235 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-689235 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mou
nt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0918 19:19:05.303394  712152 start.go:125] createHost starting for "m02" (driver="docker")
	I0918 19:19:05.306332  712152 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0918 19:19:05.306466  712152 start.go:159] libmachine.API.Create for "multinode-689235" (driver="docker")
	I0918 19:19:05.306501  712152 client.go:168] LocalClient.Create starting
	I0918 19:19:05.306581  712152 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17263-642665/.minikube/certs/ca.pem
	I0918 19:19:05.306621  712152 main.go:141] libmachine: Decoding PEM data...
	I0918 19:19:05.306637  712152 main.go:141] libmachine: Parsing certificate...
	I0918 19:19:05.306695  712152 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17263-642665/.minikube/certs/cert.pem
	I0918 19:19:05.306712  712152 main.go:141] libmachine: Decoding PEM data...
	I0918 19:19:05.306723  712152 main.go:141] libmachine: Parsing certificate...
	I0918 19:19:05.307003  712152 cli_runner.go:164] Run: docker network inspect multinode-689235 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0918 19:19:05.325969  712152 network_create.go:76] Found existing network {name:multinode-689235 subnet:0x40008ce4b0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I0918 19:19:05.326017  712152 kic.go:117] calculated static IP "192.168.58.3" for the "multinode-689235-m02" container
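
The static IP for the worker is derived from the existing network: gateway 192.168.58.1, control plane at .2, so m02 gets 192.168.58.3. A tiny sketch of that last-octet arithmetic (purely illustrative; the real kic.go logic presumably also guards against collisions and subnet bounds):

	package main

	import (
		"fmt"
		"net"
	)

	// nthHost returns the gateway address with its last octet advanced by n,
	// e.g. 192.168.58.1 + 2 -> 192.168.58.3 for the second machine.
	func nthHost(gateway string, n byte) net.IP {
		ip := net.ParseIP(gateway).To4()
		next := make(net.IP, len(ip))
		copy(next, ip)
		next[3] += n
		return next
	}

	func main() {
		fmt.Println(nthHost("192.168.58.1", 2)) // 192.168.58.3
	}
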
	I0918 19:19:05.326090  712152 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0918 19:19:05.345015  712152 cli_runner.go:164] Run: docker volume create multinode-689235-m02 --label name.minikube.sigs.k8s.io=multinode-689235-m02 --label created_by.minikube.sigs.k8s.io=true
	I0918 19:19:05.363964  712152 oci.go:103] Successfully created a docker volume multinode-689235-m02
	I0918 19:19:05.364058  712152 cli_runner.go:164] Run: docker run --rm --name multinode-689235-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-689235-m02 --entrypoint /usr/bin/test -v multinode-689235-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -d /var/lib
	I0918 19:19:05.985576  712152 oci.go:107] Successfully prepared a docker volume multinode-689235-m02
	I0918 19:19:05.985613  712152 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I0918 19:19:05.985634  712152 kic.go:190] Starting extracting preloaded images to volume ...
	I0918 19:19:05.985719  712152 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17263-642665/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-689235-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -I lz4 -xf /preloaded.tar -C /extractDir
	I0918 19:19:10.314693  712152 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17263-642665/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-689235-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 -I lz4 -xf /preloaded.tar -C /extractDir: (4.328925664s)
	I0918 19:19:10.314727  712152 kic.go:199] duration metric: took 4.329089 seconds to extract preloaded images to volume
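
The preload step above mounts the lz4-compressed image tarball read-only into a throwaway kicbase container and untars it straight into the node's named volume, so the new node comes up with its images already in CRI-O's storage. An equivalent standalone command (tarball path, volume name, and image tag are placeholders) is:

	docker run --rm --entrypoint /usr/bin/tar \
	  -v "$HOME/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro" \
	  -v mynode-var:/extractDir \
	  gcr.io/k8s-minikube/kicbase-builds:<tag> -I lz4 -xf /preloaded.tar -C /extractDir
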
	W0918 19:19:10.314881  712152 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0918 19:19:10.314990  712152 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0918 19:19:10.382759  712152 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-689235-m02 --name multinode-689235-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-689235-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-689235-m02 --network multinode-689235 --ip 192.168.58.3 --volume multinode-689235-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3
	I0918 19:19:10.760037  712152 cli_runner.go:164] Run: docker container inspect multinode-689235-m02 --format={{.State.Running}}
	I0918 19:19:10.784501  712152 cli_runner.go:164] Run: docker container inspect multinode-689235-m02 --format={{.State.Status}}
	I0918 19:19:10.815053  712152 cli_runner.go:164] Run: docker exec multinode-689235-m02 stat /var/lib/dpkg/alternatives/iptables
	I0918 19:19:10.881940  712152 oci.go:144] the created container "multinode-689235-m02" has a running status.
	I0918 19:19:10.881965  712152 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17263-642665/.minikube/machines/multinode-689235-m02/id_rsa...
	I0918 19:19:11.863107  712152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17263-642665/.minikube/machines/multinode-689235-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0918 19:19:11.863172  712152 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17263-642665/.minikube/machines/multinode-689235-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0918 19:19:11.905331  712152 cli_runner.go:164] Run: docker container inspect multinode-689235-m02 --format={{.State.Status}}
	I0918 19:19:11.938640  712152 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0918 19:19:11.938664  712152 kic_runner.go:114] Args: [docker exec --privileged multinode-689235-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
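
The key pair is generated on the host and only the public half is installed inside the container as the docker user's authorized_keys, which is what the later SSH sessions on port 33495 authenticate against. A manual equivalent (key path hypothetical) would be:

	ssh-keygen -t rsa -N '' -f ./id_rsa
	docker exec multinode-689235-m02 mkdir -p /home/docker/.ssh
	docker cp ./id_rsa.pub multinode-689235-m02:/home/docker/.ssh/authorized_keys
	docker exec --privileged multinode-689235-m02 chown -R docker:docker /home/docker/.ssh
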
	I0918 19:19:12.049842  712152 cli_runner.go:164] Run: docker container inspect multinode-689235-m02 --format={{.State.Status}}
	I0918 19:19:12.080758  712152 machine.go:88] provisioning docker machine ...
	I0918 19:19:12.080790  712152 ubuntu.go:169] provisioning hostname "multinode-689235-m02"
	I0918 19:19:12.080863  712152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-689235-m02
	I0918 19:19:12.115601  712152 main.go:141] libmachine: Using SSH client type: native
	I0918 19:19:12.116415  712152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 33495 <nil> <nil>}
	I0918 19:19:12.116438  712152 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-689235-m02 && echo "multinode-689235-m02" | sudo tee /etc/hostname
	I0918 19:19:12.287852  712152 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-689235-m02
	
	I0918 19:19:12.287930  712152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-689235-m02
	I0918 19:19:12.308873  712152 main.go:141] libmachine: Using SSH client type: native
	I0918 19:19:12.309295  712152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 33495 <nil> <nil>}
	I0918 19:19:12.309317  712152 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-689235-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-689235-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-689235-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0918 19:19:12.449009  712152 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 19:19:12.449033  712152 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17263-642665/.minikube CaCertPath:/home/jenkins/minikube-integration/17263-642665/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17263-642665/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17263-642665/.minikube}
	I0918 19:19:12.449049  712152 ubuntu.go:177] setting up certificates
	I0918 19:19:12.449057  712152 provision.go:83] configureAuth start
	I0918 19:19:12.449118  712152 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-689235-m02
	I0918 19:19:12.470168  712152 provision.go:138] copyHostCerts
	I0918 19:19:12.470205  712152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17263-642665/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17263-642665/.minikube/cert.pem
	I0918 19:19:12.470235  712152 exec_runner.go:144] found /home/jenkins/minikube-integration/17263-642665/.minikube/cert.pem, removing ...
	I0918 19:19:12.470241  712152 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17263-642665/.minikube/cert.pem
	I0918 19:19:12.470320  712152 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17263-642665/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17263-642665/.minikube/cert.pem (1123 bytes)
	I0918 19:19:12.470403  712152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17263-642665/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17263-642665/.minikube/key.pem
	I0918 19:19:12.470421  712152 exec_runner.go:144] found /home/jenkins/minikube-integration/17263-642665/.minikube/key.pem, removing ...
	I0918 19:19:12.470425  712152 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17263-642665/.minikube/key.pem
	I0918 19:19:12.470451  712152 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17263-642665/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17263-642665/.minikube/key.pem (1675 bytes)
	I0918 19:19:12.470494  712152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17263-642665/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17263-642665/.minikube/ca.pem
	I0918 19:19:12.470510  712152 exec_runner.go:144] found /home/jenkins/minikube-integration/17263-642665/.minikube/ca.pem, removing ...
	I0918 19:19:12.470516  712152 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17263-642665/.minikube/ca.pem
	I0918 19:19:12.470539  712152 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17263-642665/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17263-642665/.minikube/ca.pem (1082 bytes)
	I0918 19:19:12.470581  712152 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17263-642665/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17263-642665/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17263-642665/.minikube/certs/ca-key.pem org=jenkins.multinode-689235-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-689235-m02]
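
minikube generates this server certificate in-process with Go's crypto libraries; the SAN list shown (node IP, loopback, localhost, minikube, and the hostname) is what lets clients verify the node under any of those names. A rough openssl equivalent signing against the same CA material (filenames hypothetical, run under bash, not minikube's actual code path) is:

	openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
	  -out server.csr -subj "/O=jenkins.multinode-689235-m02"
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -out server.pem -days 365 -extfile <(printf \
	  'subjectAltName=IP:192.168.58.3,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:multinode-689235-m02')
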
	I0918 19:19:12.946157  712152 provision.go:172] copyRemoteCerts
	I0918 19:19:12.946227  712152 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0918 19:19:12.946269  712152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-689235-m02
	I0918 19:19:12.964332  712152 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/multinode-689235-m02/id_rsa Username:docker}
	I0918 19:19:13.068242  712152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17263-642665/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0918 19:19:13.068320  712152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17263-642665/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0918 19:19:13.098944  712152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17263-642665/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0918 19:19:13.099032  712152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17263-642665/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0918 19:19:13.129093  712152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17263-642665/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0918 19:19:13.129210  712152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17263-642665/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0918 19:19:13.159235  712152 provision.go:86] duration metric: configureAuth took 710.162655ms
	I0918 19:19:13.159260  712152 ubuntu.go:193] setting minikube options for container-runtime
	I0918 19:19:13.159473  712152 config.go:182] Loaded profile config "multinode-689235": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0918 19:19:13.159573  712152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-689235-m02
	I0918 19:19:13.178386  712152 main.go:141] libmachine: Using SSH client type: native
	I0918 19:19:13.178792  712152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 33495 <nil> <nil>}
	I0918 19:19:13.178815  712152 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0918 19:19:13.438314  712152 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0918 19:19:13.438340  712152 machine.go:91] provisioned docker machine in 1.35755847s
	I0918 19:19:13.438351  712152 client.go:171] LocalClient.Create took 8.13184508s
	I0918 19:19:13.438368  712152 start.go:167] duration metric: libmachine.API.Create for "multinode-689235" took 8.13190346s
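
The /etc/sysconfig/crio.minikube file written a few lines above carries CRI-O's extra flags (here, treating the in-cluster service CIDR 10.96.0.0/12 as an insecure registry) and takes effect when the crio unit restarts. Whether the override landed can be checked from the host (container name as above):

	docker exec multinode-689235-m02 cat /etc/sysconfig/crio.minikube
	docker exec multinode-689235-m02 systemctl cat crio
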
	I0918 19:19:13.438377  712152 start.go:300] post-start starting for "multinode-689235-m02" (driver="docker")
	I0918 19:19:13.438391  712152 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0918 19:19:13.438459  712152 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0918 19:19:13.438504  712152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-689235-m02
	I0918 19:19:13.457366  712152 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/multinode-689235-m02/id_rsa Username:docker}
	I0918 19:19:13.559151  712152 ssh_runner.go:195] Run: cat /etc/os-release
	I0918 19:19:13.563386  712152 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I0918 19:19:13.563405  712152 command_runner.go:130] > NAME="Ubuntu"
	I0918 19:19:13.563412  712152 command_runner.go:130] > VERSION_ID="22.04"
	I0918 19:19:13.563419  712152 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I0918 19:19:13.563425  712152 command_runner.go:130] > VERSION_CODENAME=jammy
	I0918 19:19:13.563430  712152 command_runner.go:130] > ID=ubuntu
	I0918 19:19:13.563434  712152 command_runner.go:130] > ID_LIKE=debian
	I0918 19:19:13.563440  712152 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0918 19:19:13.563453  712152 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0918 19:19:13.563465  712152 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0918 19:19:13.563478  712152 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0918 19:19:13.563487  712152 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0918 19:19:13.563544  712152 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0918 19:19:13.563572  712152 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0918 19:19:13.563587  712152 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0918 19:19:13.563594  712152 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0918 19:19:13.563608  712152 filesync.go:126] Scanning /home/jenkins/minikube-integration/17263-642665/.minikube/addons for local assets ...
	I0918 19:19:13.563675  712152 filesync.go:126] Scanning /home/jenkins/minikube-integration/17263-642665/.minikube/files for local assets ...
	I0918 19:19:13.563757  712152 filesync.go:149] local asset: /home/jenkins/minikube-integration/17263-642665/.minikube/files/etc/ssl/certs/6480032.pem -> 6480032.pem in /etc/ssl/certs
	I0918 19:19:13.563768  712152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17263-642665/.minikube/files/etc/ssl/certs/6480032.pem -> /etc/ssl/certs/6480032.pem
	I0918 19:19:13.563899  712152 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0918 19:19:13.574697  712152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17263-642665/.minikube/files/etc/ssl/certs/6480032.pem --> /etc/ssl/certs/6480032.pem (1708 bytes)
	I0918 19:19:13.605507  712152 start.go:303] post-start completed in 167.111449ms
	I0918 19:19:13.605867  712152 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-689235-m02
	I0918 19:19:13.624450  712152 profile.go:148] Saving config to /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/multinode-689235/config.json ...
	I0918 19:19:13.624748  712152 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0918 19:19:13.624797  712152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-689235-m02
	I0918 19:19:13.643673  712152 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/multinode-689235-m02/id_rsa Username:docker}
	I0918 19:19:13.737946  712152 command_runner.go:130] > 14%
	I0918 19:19:13.738036  712152 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0918 19:19:13.744162  712152 command_runner.go:130] > 169G
	I0918 19:19:13.744422  712152 start.go:128] duration metric: createHost completed in 8.441016073s
	I0918 19:19:13.744445  712152 start.go:83] releasing machines lock for "multinode-689235-m02", held for 8.441296599s
	I0918 19:19:13.744567  712152 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-689235-m02
	I0918 19:19:13.768038  712152 out.go:177] * Found network options:
	I0918 19:19:13.770194  712152 out.go:177]   - NO_PROXY=192.168.58.2
	W0918 19:19:13.772695  712152 proxy.go:119] fail to check proxy env: Error ip not in block
	W0918 19:19:13.772739  712152 proxy.go:119] fail to check proxy env: Error ip not in block
	I0918 19:19:13.772809  712152 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0918 19:19:13.772869  712152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-689235-m02
	I0918 19:19:13.773174  712152 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0918 19:19:13.773245  712152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-689235-m02
	I0918 19:19:13.799467  712152 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/multinode-689235-m02/id_rsa Username:docker}
	I0918 19:19:13.809431  712152 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/multinode-689235-m02/id_rsa Username:docker}
	I0918 19:19:14.060532  712152 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0918 19:19:14.060607  712152 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0918 19:19:14.066411  712152 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0918 19:19:14.066435  712152 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0918 19:19:14.066447  712152 command_runner.go:130] > Device: b3h/179d	Inode: 1304403     Links: 1
	I0918 19:19:14.066455  712152 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0918 19:19:14.066462  712152 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0918 19:19:14.066468  712152 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0918 19:19:14.066475  712152 command_runner.go:130] > Change: 2023-09-18 18:55:15.404659531 +0000
	I0918 19:19:14.066485  712152 command_runner.go:130] >  Birth: 2023-09-18 18:55:15.404659531 +0000
	I0918 19:19:14.066835  712152 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0918 19:19:14.093835  712152 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0918 19:19:14.093917  712152 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0918 19:19:14.138380  712152 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0918 19:19:14.138429  712152 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
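
Disabling here just renames the matching bridge/podman CNI configs to *.mk_disabled so CRI-O stops loading them, leaving minikube's own CNI to be installed later; the rename is reversible. The disabled files can be listed afterwards with:

	docker exec multinode-689235-m02 sh -c 'ls -l /etc/cni/net.d/*.mk_disabled'
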
	I0918 19:19:14.138438  712152 start.go:469] detecting cgroup driver to use...
	I0918 19:19:14.138469  712152 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0918 19:19:14.138526  712152 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0918 19:19:14.161341  712152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0918 19:19:14.176735  712152 docker.go:196] disabling cri-docker service (if available) ...
	I0918 19:19:14.176805  712152 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0918 19:19:14.194198  712152 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0918 19:19:14.212904  712152 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0918 19:19:14.309503  712152 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0918 19:19:14.414676  712152 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0918 19:19:14.414759  712152 docker.go:212] disabling docker service ...
	I0918 19:19:14.414886  712152 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0918 19:19:14.437932  712152 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0918 19:19:14.452154  712152 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0918 19:19:14.550303  712152 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0918 19:19:14.550375  712152 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0918 19:19:14.663017  712152 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0918 19:19:14.663161  712152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0918 19:19:14.680375  712152 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 19:19:14.700506  712152 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0918 19:19:14.702396  712152 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0918 19:19:14.702468  712152 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 19:19:14.715159  712152 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0918 19:19:14.715270  712152 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 19:19:14.727207  712152 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 19:19:14.740244  712152 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
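
Taken together, the three sed edits above pin the sandbox image and align CRI-O's cgroup handling with the kubelet's cgroupfs driver; the drop-in at /etc/crio/crio.conf.d/02-crio.conf ends up containing lines like these (illustrative, consistent with the sed targets above and the crio config dump later in this log):

	pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
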
	I0918 19:19:14.753891  712152 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0918 19:19:14.766356  712152 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0918 19:19:14.775820  712152 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0918 19:19:14.777161  712152 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0918 19:19:14.787674  712152 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0918 19:19:14.881422  712152 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0918 19:19:15.040480  712152 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0918 19:19:15.040644  712152 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0918 19:19:15.046596  712152 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0918 19:19:15.046677  712152 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0918 19:19:15.046700  712152 command_runner.go:130] > Device: bch/188d	Inode: 190         Links: 1
	I0918 19:19:15.046726  712152 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0918 19:19:15.046770  712152 command_runner.go:130] > Access: 2023-09-18 19:19:14.998527363 +0000
	I0918 19:19:15.046808  712152 command_runner.go:130] > Modify: 2023-09-18 19:19:14.998527363 +0000
	I0918 19:19:15.046890  712152 command_runner.go:130] > Change: 2023-09-18 19:19:14.998527363 +0000
	I0918 19:19:15.047077  712152 command_runner.go:130] >  Birth: -
	I0918 19:19:15.047129  712152 start.go:537] Will wait 60s for crictl version
	I0918 19:19:15.047291  712152 ssh_runner.go:195] Run: which crictl
	I0918 19:19:15.053519  712152 command_runner.go:130] > /usr/bin/crictl
	I0918 19:19:15.053710  712152 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0918 19:19:15.114238  712152 command_runner.go:130] > Version:  0.1.0
	I0918 19:19:15.114634  712152 command_runner.go:130] > RuntimeName:  cri-o
	I0918 19:19:15.114984  712152 command_runner.go:130] > RuntimeVersion:  1.24.6
	I0918 19:19:15.115275  712152 command_runner.go:130] > RuntimeApiVersion:  v1
	I0918 19:19:15.118893  712152 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0918 19:19:15.119071  712152 ssh_runner.go:195] Run: crio --version
	I0918 19:19:15.180786  712152 command_runner.go:130] > crio version 1.24.6
	I0918 19:19:15.180863  712152 command_runner.go:130] > Version:          1.24.6
	I0918 19:19:15.180893  712152 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0918 19:19:15.180912  712152 command_runner.go:130] > GitTreeState:     clean
	I0918 19:19:15.180948  712152 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0918 19:19:15.180972  712152 command_runner.go:130] > GoVersion:        go1.18.2
	I0918 19:19:15.180992  712152 command_runner.go:130] > Compiler:         gc
	I0918 19:19:15.181036  712152 command_runner.go:130] > Platform:         linux/arm64
	I0918 19:19:15.181066  712152 command_runner.go:130] > Linkmode:         dynamic
	I0918 19:19:15.181113  712152 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0918 19:19:15.181144  712152 command_runner.go:130] > SeccompEnabled:   true
	I0918 19:19:15.181163  712152 command_runner.go:130] > AppArmorEnabled:  false
	I0918 19:19:15.183767  712152 ssh_runner.go:195] Run: crio --version
	I0918 19:19:15.240538  712152 command_runner.go:130] > crio version 1.24.6
	I0918 19:19:15.240614  712152 command_runner.go:130] > Version:          1.24.6
	I0918 19:19:15.240638  712152 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0918 19:19:15.240658  712152 command_runner.go:130] > GitTreeState:     clean
	I0918 19:19:15.240691  712152 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0918 19:19:15.240714  712152 command_runner.go:130] > GoVersion:        go1.18.2
	I0918 19:19:15.240734  712152 command_runner.go:130] > Compiler:         gc
	I0918 19:19:15.240769  712152 command_runner.go:130] > Platform:         linux/arm64
	I0918 19:19:15.240797  712152 command_runner.go:130] > Linkmode:         dynamic
	I0918 19:19:15.240821  712152 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0918 19:19:15.240857  712152 command_runner.go:130] > SeccompEnabled:   true
	I0918 19:19:15.240881  712152 command_runner.go:130] > AppArmorEnabled:  false
	I0918 19:19:15.244829  712152 out.go:177] * Preparing Kubernetes v1.28.2 on CRI-O 1.24.6 ...
	I0918 19:19:15.247113  712152 out.go:177]   - env NO_PROXY=192.168.58.2
	I0918 19:19:15.249436  712152 cli_runner.go:164] Run: docker network inspect multinode-689235 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0918 19:19:15.268038  712152 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0918 19:19:15.272958  712152 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 19:19:15.287606  712152 certs.go:56] Setting up /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/multinode-689235 for IP: 192.168.58.3
	I0918 19:19:15.287639  712152 certs.go:190] acquiring lock for shared ca certs: {Name:mkb16b377708c2d983623434e9d896d9d8fd7133 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0918 19:19:15.287808  712152 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17263-642665/.minikube/ca.key
	I0918 19:19:15.287857  712152 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17263-642665/.minikube/proxy-client-ca.key
	I0918 19:19:15.287877  712152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17263-642665/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0918 19:19:15.287894  712152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17263-642665/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0918 19:19:15.287913  712152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17263-642665/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0918 19:19:15.287929  712152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17263-642665/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0918 19:19:15.288001  712152 certs.go:437] found cert: /home/jenkins/minikube-integration/17263-642665/.minikube/certs/home/jenkins/minikube-integration/17263-642665/.minikube/certs/648003.pem (1338 bytes)
	W0918 19:19:15.288039  712152 certs.go:433] ignoring /home/jenkins/minikube-integration/17263-642665/.minikube/certs/home/jenkins/minikube-integration/17263-642665/.minikube/certs/648003_empty.pem, impossibly tiny 0 bytes
	I0918 19:19:15.288054  712152 certs.go:437] found cert: /home/jenkins/minikube-integration/17263-642665/.minikube/certs/home/jenkins/minikube-integration/17263-642665/.minikube/certs/ca-key.pem (1675 bytes)
	I0918 19:19:15.288081  712152 certs.go:437] found cert: /home/jenkins/minikube-integration/17263-642665/.minikube/certs/home/jenkins/minikube-integration/17263-642665/.minikube/certs/ca.pem (1082 bytes)
	I0918 19:19:15.288112  712152 certs.go:437] found cert: /home/jenkins/minikube-integration/17263-642665/.minikube/certs/home/jenkins/minikube-integration/17263-642665/.minikube/certs/cert.pem (1123 bytes)
	I0918 19:19:15.288139  712152 certs.go:437] found cert: /home/jenkins/minikube-integration/17263-642665/.minikube/certs/home/jenkins/minikube-integration/17263-642665/.minikube/certs/key.pem (1675 bytes)
	I0918 19:19:15.288185  712152 certs.go:437] found cert: /home/jenkins/minikube-integration/17263-642665/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17263-642665/.minikube/files/etc/ssl/certs/6480032.pem (1708 bytes)
	I0918 19:19:15.288251  712152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17263-642665/.minikube/files/etc/ssl/certs/6480032.pem -> /usr/share/ca-certificates/6480032.pem
	I0918 19:19:15.288270  712152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17263-642665/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0918 19:19:15.288283  712152 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17263-642665/.minikube/certs/648003.pem -> /usr/share/ca-certificates/648003.pem
	I0918 19:19:15.288702  712152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17263-642665/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0918 19:19:15.321746  712152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17263-642665/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0918 19:19:15.352300  712152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17263-642665/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0918 19:19:15.382503  712152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17263-642665/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0918 19:19:15.413593  712152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17263-642665/.minikube/files/etc/ssl/certs/6480032.pem --> /usr/share/ca-certificates/6480032.pem (1708 bytes)
	I0918 19:19:15.445530  712152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17263-642665/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0918 19:19:15.475840  712152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17263-642665/.minikube/certs/648003.pem --> /usr/share/ca-certificates/648003.pem (1338 bytes)
	I0918 19:19:15.505680  712152 ssh_runner.go:195] Run: openssl version
	I0918 19:19:15.512717  712152 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0918 19:19:15.513031  712152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6480032.pem && ln -fs /usr/share/ca-certificates/6480032.pem /etc/ssl/certs/6480032.pem"
	I0918 19:19:15.526037  712152 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6480032.pem
	I0918 19:19:15.531216  712152 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 18 19:02 /usr/share/ca-certificates/6480032.pem
	I0918 19:19:15.531250  712152 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 18 19:02 /usr/share/ca-certificates/6480032.pem
	I0918 19:19:15.531305  712152 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6480032.pem
	I0918 19:19:15.539742  712152 command_runner.go:130] > 3ec20f2e
	I0918 19:19:15.540231  712152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6480032.pem /etc/ssl/certs/3ec20f2e.0"
	I0918 19:19:15.552256  712152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0918 19:19:15.564545  712152 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0918 19:19:15.569665  712152 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 18 18:55 /usr/share/ca-certificates/minikubeCA.pem
	I0918 19:19:15.569887  712152 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 18 18:55 /usr/share/ca-certificates/minikubeCA.pem
	I0918 19:19:15.569988  712152 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0918 19:19:15.578603  712152 command_runner.go:130] > b5213941
	I0918 19:19:15.579161  712152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0918 19:19:15.591587  712152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/648003.pem && ln -fs /usr/share/ca-certificates/648003.pem /etc/ssl/certs/648003.pem"
	I0918 19:19:15.603640  712152 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/648003.pem
	I0918 19:19:15.608395  712152 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 18 19:02 /usr/share/ca-certificates/648003.pem
	I0918 19:19:15.608422  712152 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 18 19:02 /usr/share/ca-certificates/648003.pem
	I0918 19:19:15.608473  712152 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/648003.pem
	I0918 19:19:15.616749  712152 command_runner.go:130] > 51391683
	I0918 19:19:15.617110  712152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/648003.pem /etc/ssl/certs/51391683.0"
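
Each of the three certificates is installed the way OpenSSL expects a CA store to be laid out: the subject hash printed by openssl x509 -hash (3ec20f2e, b5213941, 51391683 above) names a <hash>.0 symlink in /etc/ssl/certs that the TLS library resolves at verification time. For one hypothetical extra cert the same install is:

	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/extra.pem)
	sudo ln -fs /usr/share/ca-certificates/extra.pem "/etc/ssl/certs/${HASH}.0"
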
	I0918 19:19:15.629108  712152 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0918 19:19:15.633918  712152 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0918 19:19:15.633977  712152 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0918 19:19:15.634096  712152 ssh_runner.go:195] Run: crio config
	I0918 19:19:15.689765  712152 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0918 19:19:15.689792  712152 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0918 19:19:15.689802  712152 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0918 19:19:15.689807  712152 command_runner.go:130] > #
	I0918 19:19:15.689824  712152 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0918 19:19:15.689832  712152 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0918 19:19:15.689840  712152 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0918 19:19:15.689853  712152 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0918 19:19:15.689858  712152 command_runner.go:130] > # reload'.
	I0918 19:19:15.689870  712152 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0918 19:19:15.689878  712152 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0918 19:19:15.689895  712152 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0918 19:19:15.689905  712152 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0918 19:19:15.689910  712152 command_runner.go:130] > [crio]
	I0918 19:19:15.689918  712152 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0918 19:19:15.689926  712152 command_runner.go:130] > # containers images, in this directory.
	I0918 19:19:15.689937  712152 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I0918 19:19:15.689948  712152 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0918 19:19:15.690173  712152 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I0918 19:19:15.690191  712152 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0918 19:19:15.690210  712152 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0918 19:19:15.690452  712152 command_runner.go:130] > # storage_driver = "vfs"
	I0918 19:19:15.690469  712152 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0918 19:19:15.690477  712152 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0918 19:19:15.690489  712152 command_runner.go:130] > # storage_option = [
	I0918 19:19:15.690498  712152 command_runner.go:130] > # ]
	I0918 19:19:15.690506  712152 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0918 19:19:15.690525  712152 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0918 19:19:15.690534  712152 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0918 19:19:15.690542  712152 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0918 19:19:15.690553  712152 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0918 19:19:15.690559  712152 command_runner.go:130] > # always happen on a node reboot
	I0918 19:19:15.690565  712152 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0918 19:19:15.690577  712152 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0918 19:19:15.690585  712152 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0918 19:19:15.690602  712152 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0918 19:19:15.690612  712152 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0918 19:19:15.690622  712152 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0918 19:19:15.690635  712152 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0918 19:19:15.690645  712152 command_runner.go:130] > # internal_wipe = true
	I0918 19:19:15.690652  712152 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0918 19:19:15.690670  712152 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0918 19:19:15.690678  712152 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0918 19:19:15.690690  712152 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0918 19:19:15.690698  712152 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0918 19:19:15.690706  712152 command_runner.go:130] > [crio.api]
	I0918 19:19:15.690714  712152 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0918 19:19:15.690723  712152 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0918 19:19:15.690730  712152 command_runner.go:130] > # IP address on which the stream server will listen.
	I0918 19:19:15.690736  712152 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0918 19:19:15.690753  712152 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0918 19:19:15.690760  712152 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0918 19:19:15.690767  712152 command_runner.go:130] > # stream_port = "0"
	I0918 19:19:15.690774  712152 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0918 19:19:15.690782  712152 command_runner.go:130] > # stream_enable_tls = false
	I0918 19:19:15.690790  712152 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0918 19:19:15.690798  712152 command_runner.go:130] > # stream_idle_timeout = ""
	I0918 19:19:15.690807  712152 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0918 19:19:15.690835  712152 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0918 19:19:15.690846  712152 command_runner.go:130] > # minutes.
	I0918 19:19:15.690855  712152 command_runner.go:130] > # stream_tls_cert = ""
	I0918 19:19:15.690865  712152 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0918 19:19:15.690877  712152 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0918 19:19:15.690885  712152 command_runner.go:130] > # stream_tls_key = ""
	I0918 19:19:15.690901  712152 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0918 19:19:15.690912  712152 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0918 19:19:15.690919  712152 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0918 19:19:15.690928  712152 command_runner.go:130] > # stream_tls_ca = ""
	I0918 19:19:15.690937  712152 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0918 19:19:15.690947  712152 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I0918 19:19:15.690956  712152 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0918 19:19:15.690964  712152 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I0918 19:19:15.691004  712152 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0918 19:19:15.691017  712152 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0918 19:19:15.691022  712152 command_runner.go:130] > [crio.runtime]
	I0918 19:19:15.691030  712152 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0918 19:19:15.691040  712152 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0918 19:19:15.691053  712152 command_runner.go:130] > # "nofile=1024:2048"
	I0918 19:19:15.691064  712152 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0918 19:19:15.691070  712152 command_runner.go:130] > # default_ulimits = [
	I0918 19:19:15.691078  712152 command_runner.go:130] > # ]
	I0918 19:19:15.691088  712152 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0918 19:19:15.691096  712152 command_runner.go:130] > # no_pivot = false
	I0918 19:19:15.691103  712152 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0918 19:19:15.691111  712152 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0918 19:19:15.691126  712152 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0918 19:19:15.691134  712152 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0918 19:19:15.691145  712152 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0918 19:19:15.691153  712152 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0918 19:19:15.691160  712152 command_runner.go:130] > # conmon = ""
	I0918 19:19:15.691170  712152 command_runner.go:130] > # Cgroup setting for conmon
	I0918 19:19:15.691179  712152 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0918 19:19:15.691187  712152 command_runner.go:130] > conmon_cgroup = "pod"
	I0918 19:19:15.691200  712152 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0918 19:19:15.691210  712152 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0918 19:19:15.691219  712152 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0918 19:19:15.691228  712152 command_runner.go:130] > # conmon_env = [
	I0918 19:19:15.691232  712152 command_runner.go:130] > # ]
	I0918 19:19:15.691239  712152 command_runner.go:130] > # Additional environment variables to set for all the
	I0918 19:19:15.691246  712152 command_runner.go:130] > # containers. These are overridden if set in the
	I0918 19:19:15.691258  712152 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0918 19:19:15.691263  712152 command_runner.go:130] > # default_env = [
	I0918 19:19:15.691274  712152 command_runner.go:130] > # ]
	I0918 19:19:15.691285  712152 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0918 19:19:15.691290  712152 command_runner.go:130] > # selinux = false
	I0918 19:19:15.691302  712152 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0918 19:19:15.691310  712152 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0918 19:19:15.691322  712152 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0918 19:19:15.691327  712152 command_runner.go:130] > # seccomp_profile = ""
	I0918 19:19:15.691339  712152 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0918 19:19:15.691352  712152 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0918 19:19:15.691362  712152 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0918 19:19:15.691369  712152 command_runner.go:130] > # which might increase security.
	I0918 19:19:15.691641  712152 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I0918 19:19:15.691669  712152 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0918 19:19:15.691681  712152 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0918 19:19:15.691693  712152 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0918 19:19:15.691702  712152 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0918 19:19:15.691711  712152 command_runner.go:130] > # This option supports live configuration reload.
	I0918 19:19:15.691717  712152 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0918 19:19:15.691728  712152 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0918 19:19:15.691745  712152 command_runner.go:130] > # the cgroup blockio controller.
	I0918 19:19:15.691753  712152 command_runner.go:130] > # blockio_config_file = ""
	I0918 19:19:15.691762  712152 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0918 19:19:15.691767  712152 command_runner.go:130] > # irqbalance daemon.
	I0918 19:19:15.691775  712152 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0918 19:19:15.691812  712152 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0918 19:19:15.691819  712152 command_runner.go:130] > # This option supports live configuration reload.
	I0918 19:19:15.691827  712152 command_runner.go:130] > # rdt_config_file = ""
	I0918 19:19:15.691834  712152 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0918 19:19:15.691843  712152 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0918 19:19:15.691851  712152 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0918 19:19:15.691859  712152 command_runner.go:130] > # separate_pull_cgroup = ""
	I0918 19:19:15.691877  712152 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0918 19:19:15.691889  712152 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0918 19:19:15.691894  712152 command_runner.go:130] > # will be added.
	I0918 19:19:15.691904  712152 command_runner.go:130] > # default_capabilities = [
	I0918 19:19:15.691909  712152 command_runner.go:130] > # 	"CHOWN",
	I0918 19:19:15.691919  712152 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0918 19:19:15.691924  712152 command_runner.go:130] > # 	"FSETID",
	I0918 19:19:15.691928  712152 command_runner.go:130] > # 	"FOWNER",
	I0918 19:19:15.691938  712152 command_runner.go:130] > # 	"SETGID",
	I0918 19:19:15.691949  712152 command_runner.go:130] > # 	"SETUID",
	I0918 19:19:15.691957  712152 command_runner.go:130] > # 	"SETPCAP",
	I0918 19:19:15.691962  712152 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0918 19:19:15.691967  712152 command_runner.go:130] > # 	"KILL",
	I0918 19:19:15.692189  712152 command_runner.go:130] > # ]
	I0918 19:19:15.692208  712152 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0918 19:19:15.692227  712152 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0918 19:19:15.692238  712152 command_runner.go:130] > # add_inheritable_capabilities = true
	I0918 19:19:15.692246  712152 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0918 19:19:15.692258  712152 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0918 19:19:15.692264  712152 command_runner.go:130] > # default_sysctls = [
	I0918 19:19:15.692272  712152 command_runner.go:130] > # ]
	I0918 19:19:15.692278  712152 command_runner.go:130] > # List of devices on the host that a
	I0918 19:19:15.692286  712152 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0918 19:19:15.692300  712152 command_runner.go:130] > # allowed_devices = [
	I0918 19:19:15.692306  712152 command_runner.go:130] > # 	"/dev/fuse",
	I0918 19:19:15.692311  712152 command_runner.go:130] > # ]
	I0918 19:19:15.692319  712152 command_runner.go:130] > # List of additional devices. specified as
	I0918 19:19:15.692338  712152 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0918 19:19:15.692349  712152 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0918 19:19:15.692357  712152 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0918 19:19:15.692362  712152 command_runner.go:130] > # additional_devices = [
	I0918 19:19:15.692383  712152 command_runner.go:130] > # ]
	I0918 19:19:15.692393  712152 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0918 19:19:15.692398  712152 command_runner.go:130] > # cdi_spec_dirs = [
	I0918 19:19:15.692404  712152 command_runner.go:130] > # 	"/etc/cdi",
	I0918 19:19:15.692411  712152 command_runner.go:130] > # 	"/var/run/cdi",
	I0918 19:19:15.692416  712152 command_runner.go:130] > # ]
	I0918 19:19:15.692425  712152 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0918 19:19:15.692436  712152 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0918 19:19:15.692441  712152 command_runner.go:130] > # Defaults to false.
	I0918 19:19:15.692456  712152 command_runner.go:130] > # device_ownership_from_security_context = false
	I0918 19:19:15.692465  712152 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0918 19:19:15.692475  712152 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0918 19:19:15.692481  712152 command_runner.go:130] > # hooks_dir = [
	I0918 19:19:15.692490  712152 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0918 19:19:15.692495  712152 command_runner.go:130] > # ]
	I0918 19:19:15.692502  712152 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0918 19:19:15.692510  712152 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0918 19:19:15.692531  712152 command_runner.go:130] > # its default mounts from the following two files:
	I0918 19:19:15.692539  712152 command_runner.go:130] > #
	I0918 19:19:15.692547  712152 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0918 19:19:15.692559  712152 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0918 19:19:15.692567  712152 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0918 19:19:15.692575  712152 command_runner.go:130] > #
	I0918 19:19:15.692582  712152 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0918 19:19:15.692590  712152 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0918 19:19:15.692604  712152 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0918 19:19:15.692617  712152 command_runner.go:130] > #      only add mounts it finds in this file.
	I0918 19:19:15.692621  712152 command_runner.go:130] > #
	I0918 19:19:15.692633  712152 command_runner.go:130] > # default_mounts_file = ""
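For reference, a minimal override in the /SRC:/DST, one-mount-per-line format described above might look like the following sketch (the host path is hypothetical; /etc/containers/mounts.conf is the override location named in the comment):

	# one bind mount per line: host path, colon, container path
	cat <<'EOF' | sudo tee /etc/containers/mounts.conf
	/usr/share/secrets:/run/secrets
	EOF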
	I0918 19:19:15.692640  712152 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0918 19:19:15.692652  712152 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0918 19:19:15.692657  712152 command_runner.go:130] > # pids_limit = 0
	I0918 19:19:15.692665  712152 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0918 19:19:15.692682  712152 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0918 19:19:15.692690  712152 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0918 19:19:15.692705  712152 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0918 19:19:15.692714  712152 command_runner.go:130] > # log_size_max = -1
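Both deprecation notes above defer to kubelet flags; a sketch of the suggested replacements (the limit values are illustrative, not recommendations):

	# prefer the kubelet's own limits over the deprecated CRI-O options
	kubelet --pod-pids-limit=4096 --container-log-max-size=10Mi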
	I0918 19:19:15.692743  712152 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0918 19:19:15.692760  712152 command_runner.go:130] > # log_to_journald = false
	I0918 19:19:15.692769  712152 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0918 19:19:15.692775  712152 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0918 19:19:15.692782  712152 command_runner.go:130] > # Path to directory for container attach sockets.
	I0918 19:19:15.692793  712152 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0918 19:19:15.692800  712152 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0918 19:19:15.692810  712152 command_runner.go:130] > # bind_mount_prefix = ""
	I0918 19:19:15.692817  712152 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0918 19:19:15.692831  712152 command_runner.go:130] > # read_only = false
	I0918 19:19:15.692845  712152 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0918 19:19:15.692853  712152 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0918 19:19:15.692858  712152 command_runner.go:130] > # live configuration reload.
	I0918 19:19:15.692863  712152 command_runner.go:130] > # log_level = "info"
	I0918 19:19:15.692874  712152 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0918 19:19:15.692881  712152 command_runner.go:130] > # This option supports live configuration reload.
	I0918 19:19:15.692889  712152 command_runner.go:130] > # log_filter = ""
	I0918 19:19:15.692897  712152 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0918 19:19:15.692914  712152 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0918 19:19:15.692923  712152 command_runner.go:130] > # separated by comma.
	I0918 19:19:15.692928  712152 command_runner.go:130] > # uid_mappings = ""
	I0918 19:19:15.692936  712152 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0918 19:19:15.692948  712152 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0918 19:19:15.692956  712152 command_runner.go:130] > # separated by comma.
	I0918 19:19:15.692965  712152 command_runner.go:130] > # gid_mappings = ""
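As a concrete instance of the containerID:HostID:Size form described above, mapping container root onto an unprivileged host range could look like this sketch (the 100000/65536 range is an assumption; CRI-O merges drop-in fragments from /etc/crio/crio.conf.d/):

	cat <<'EOF' | sudo tee /etc/crio/crio.conf.d/10-userns.conf
	[crio.runtime]
	uid_mappings = "0:100000:65536"
	gid_mappings = "0:100000:65536"
	EOF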
	I0918 19:19:15.692973  712152 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0918 19:19:15.692991  712152 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0918 19:19:15.692999  712152 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0918 19:19:15.693007  712152 command_runner.go:130] > # minimum_mappable_uid = -1
	I0918 19:19:15.693015  712152 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0918 19:19:15.693023  712152 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0918 19:19:15.693032  712152 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0918 19:19:15.693042  712152 command_runner.go:130] > # minimum_mappable_gid = -1
	I0918 19:19:15.693049  712152 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0918 19:19:15.693065  712152 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0918 19:19:15.693076  712152 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0918 19:19:15.693082  712152 command_runner.go:130] > # ctr_stop_timeout = 30
	I0918 19:19:15.693093  712152 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0918 19:19:15.693102  712152 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0918 19:19:15.693108  712152 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0918 19:19:15.693118  712152 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0918 19:19:15.693421  712152 command_runner.go:130] > # drop_infra_ctr = true
	I0918 19:19:15.693453  712152 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0918 19:19:15.693461  712152 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0918 19:19:15.693470  712152 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0918 19:19:15.693475  712152 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0918 19:19:15.693482  712152 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0918 19:19:15.693489  712152 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0918 19:19:15.693497  712152 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0918 19:19:15.693512  712152 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0918 19:19:15.693518  712152 command_runner.go:130] > # pinns_path = ""
	I0918 19:19:15.693532  712152 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0918 19:19:15.693540  712152 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0918 19:19:15.693552  712152 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0918 19:19:15.693558  712152 command_runner.go:130] > # default_runtime = "runc"
	I0918 19:19:15.693564  712152 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0918 19:19:15.693574  712152 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I0918 19:19:15.693594  712152 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0918 19:19:15.693605  712152 command_runner.go:130] > # creation as a file is not desired either.
	I0918 19:19:15.693619  712152 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0918 19:19:15.693628  712152 command_runner.go:130] > # the hostname is being managed dynamically.
	I0918 19:19:15.693638  712152 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0918 19:19:15.693644  712152 command_runner.go:130] > # ]
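Using the /etc/hostname case called out above, a drop-in that rejects it as an absent mount source would be a short fragment (sketch only):

	cat <<'EOF' | sudo tee /etc/crio/crio.conf.d/20-absent-mounts.conf
	[crio.runtime]
	absent_mount_sources_to_reject = [
		"/etc/hostname",
	]
	EOF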
	I0918 19:19:15.693652  712152 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0918 19:19:15.693670  712152 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0918 19:19:15.693682  712152 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0918 19:19:15.693690  712152 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0918 19:19:15.693697  712152 command_runner.go:130] > #
	I0918 19:19:15.693703  712152 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0918 19:19:15.693710  712152 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0918 19:19:15.693718  712152 command_runner.go:130] > #  runtime_type = "oci"
	I0918 19:19:15.693724  712152 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0918 19:19:15.693730  712152 command_runner.go:130] > #  privileged_without_host_devices = false
	I0918 19:19:15.693745  712152 command_runner.go:130] > #  allowed_annotations = []
	I0918 19:19:15.693752  712152 command_runner.go:130] > # Where:
	I0918 19:19:15.693764  712152 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0918 19:19:15.693775  712152 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0918 19:19:15.693786  712152 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0918 19:19:15.693797  712152 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0918 19:19:15.693805  712152 command_runner.go:130] > #   in $PATH.
	I0918 19:19:15.693819  712152 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0918 19:19:15.693830  712152 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0918 19:19:15.693838  712152 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0918 19:19:15.693846  712152 command_runner.go:130] > #   state.
	I0918 19:19:15.693855  712152 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0918 19:19:15.693865  712152 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0918 19:19:15.693876  712152 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0918 19:19:15.693892  712152 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0918 19:19:15.693904  712152 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0918 19:19:15.693921  712152 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0918 19:19:15.693931  712152 command_runner.go:130] > #   The currently recognized values are:
	I0918 19:19:15.693950  712152 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0918 19:19:15.693971  712152 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0918 19:19:15.693985  712152 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0918 19:19:15.693994  712152 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0918 19:19:15.694008  712152 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0918 19:19:15.694021  712152 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0918 19:19:15.694077  712152 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0918 19:19:15.694092  712152 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0918 19:19:15.694099  712152 command_runner.go:130] > #   should be moved to the container's cgroup
	I0918 19:19:15.694109  712152 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0918 19:19:15.694116  712152 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0918 19:19:15.694124  712152 command_runner.go:130] > runtime_type = "oci"
	I0918 19:19:15.694130  712152 command_runner.go:130] > runtime_root = "/run/runc"
	I0918 19:19:15.694153  712152 command_runner.go:130] > runtime_config_path = ""
	I0918 19:19:15.694165  712152 command_runner.go:130] > monitor_path = ""
	I0918 19:19:15.694170  712152 command_runner.go:130] > monitor_cgroup = ""
	I0918 19:19:15.694175  712152 command_runner.go:130] > monitor_exec_cgroup = ""
	I0918 19:19:15.694195  712152 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0918 19:19:15.694204  712152 command_runner.go:130] > # running containers
	I0918 19:19:15.694210  712152 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0918 19:19:15.694234  712152 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0918 19:19:15.694313  712152 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0918 19:19:15.694329  712152 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0918 19:19:15.694336  712152 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0918 19:19:15.694342  712152 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0918 19:19:15.694351  712152 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0918 19:19:15.694357  712152 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0918 19:19:15.694381  712152 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0918 19:19:15.694394  712152 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
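To actually use one of the commented-out handlers, the table entry has to exist and a Kubernetes RuntimeClass has to reference it by handler name; a hedged sketch for crun (the binary path is an assumption for this host):

	# 1) register the handler with CRI-O and restart it
	cat <<'EOF' | sudo tee /etc/crio/crio.conf.d/30-crun.conf
	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"
	runtime_type = "oci"
	EOF
	sudo systemctl restart crio

	# 2) expose the handler to pods as a RuntimeClass
	cat <<'EOF' | kubectl apply -f -
	apiVersion: node.k8s.io/v1
	kind: RuntimeClass
	metadata:
	  name: crun
	handler: crun
	EOF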
	I0918 19:19:15.694406  712152 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0918 19:19:15.694415  712152 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0918 19:19:15.694423  712152 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0918 19:19:15.694436  712152 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0918 19:19:15.694455  712152 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0918 19:19:15.694466  712152 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0918 19:19:15.694481  712152 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0918 19:19:15.694494  712152 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0918 19:19:15.694502  712152 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0918 19:19:15.694511  712152 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0918 19:19:15.694526  712152 command_runner.go:130] > # Example:
	I0918 19:19:15.694533  712152 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0918 19:19:15.694542  712152 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0918 19:19:15.694554  712152 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0918 19:19:15.694564  712152 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0918 19:19:15.694572  712152 command_runner.go:130] > # cpuset = "0-1"
	I0918 19:19:15.694577  712152 command_runner.go:130] > # cpushares = 0
	I0918 19:19:15.694581  712152 command_runner.go:130] > # Where:
	I0918 19:19:15.694590  712152 command_runner.go:130] > # The workload name is workload-type.
	I0918 19:19:15.694606  712152 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0918 19:19:15.694616  712152 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0918 19:19:15.694626  712152 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0918 19:19:15.694639  712152 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0918 19:19:15.694650  712152 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0918 19:19:15.694654  712152 command_runner.go:130] > # 
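Put together, a pod opting into the example workload above would carry the activation annotation plus an optional per-container override; a sketch (pod name, container name, and image are placeholders, and the override value follows the format shown in the comment):

	cat <<'EOF' | kubectl apply -f -
	apiVersion: v1
	kind: Pod
	metadata:
	  name: workload-demo
	  annotations:
	    io.crio/workload: ""                              # activation annotation, key only
	    io.crio.workload-type/app: '{"cpushares": "512"}' # per-container override
	spec:
	  containers:
	  - name: app
	    image: nginx
	EOF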
	I0918 19:19:15.694662  712152 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0918 19:19:15.694676  712152 command_runner.go:130] > #
	I0918 19:19:15.694687  712152 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0918 19:19:15.694699  712152 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0918 19:19:15.694710  712152 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0918 19:19:15.694720  712152 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0918 19:19:15.694731  712152 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0918 19:19:15.694736  712152 command_runner.go:130] > [crio.image]
	I0918 19:19:15.694749  712152 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0918 19:19:15.694758  712152 command_runner.go:130] > # default_transport = "docker://"
	I0918 19:19:15.694766  712152 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0918 19:19:15.694778  712152 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0918 19:19:15.694788  712152 command_runner.go:130] > # global_auth_file = ""
	I0918 19:19:15.694794  712152 command_runner.go:130] > # The image used to instantiate infra containers.
	I0918 19:19:15.694804  712152 command_runner.go:130] > # This option supports live configuration reload.
	I0918 19:19:15.694824  712152 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0918 19:19:15.694833  712152 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0918 19:19:15.694840  712152 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0918 19:19:15.694850  712152 command_runner.go:130] > # This option supports live configuration reload.
	I0918 19:19:15.694859  712152 command_runner.go:130] > # pause_image_auth_file = ""
	I0918 19:19:15.694867  712152 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0918 19:19:15.694877  712152 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0918 19:19:15.694894  712152 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0918 19:19:15.694904  712152 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0918 19:19:15.694910  712152 command_runner.go:130] > # pause_command = "/pause"
	I0918 19:19:15.694918  712152 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0918 19:19:15.694929  712152 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0918 19:19:15.694940  712152 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0918 19:19:15.694951  712152 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0918 19:19:15.694960  712152 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0918 19:19:15.694975  712152 command_runner.go:130] > # signature_policy = ""
	I0918 19:19:15.694986  712152 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0918 19:19:15.694994  712152 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0918 19:19:15.694999  712152 command_runner.go:130] > # changing them here.
	I0918 19:19:15.695008  712152 command_runner.go:130] > # insecure_registries = [
	I0918 19:19:15.695012  712152 command_runner.go:130] > # ]
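As the comment recommends, registry changes usually belong in the system-wide file rather than here; a sketch of marking a private registry insecure in /etc/containers/registries.conf (the registry host is hypothetical):

	cat <<'EOF' | sudo tee -a /etc/containers/registries.conf
	[[registry]]
	location = "registry.example.internal:5000"
	insecure = true
	EOF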
	I0918 19:19:15.695122  712152 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0918 19:19:15.695139  712152 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0918 19:19:15.695359  712152 command_runner.go:130] > # image_volumes = "mkdir"
	I0918 19:19:15.695376  712152 command_runner.go:130] > # Temporary directory to use for storing big files
	I0918 19:19:15.695383  712152 command_runner.go:130] > # big_files_temporary_dir = ""
	I0918 19:19:15.695394  712152 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0918 19:19:15.695402  712152 command_runner.go:130] > # CNI plugins.
	I0918 19:19:15.695408  712152 command_runner.go:130] > [crio.network]
	I0918 19:19:15.695426  712152 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0918 19:19:15.695436  712152 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0918 19:19:15.695445  712152 command_runner.go:130] > # cni_default_network = ""
	I0918 19:19:15.695453  712152 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0918 19:19:15.695459  712152 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0918 19:19:15.695469  712152 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0918 19:19:15.695477  712152 command_runner.go:130] > # plugin_dirs = [
	I0918 19:19:15.695482  712152 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0918 19:19:15.695489  712152 command_runner.go:130] > # ]
	I0918 19:19:15.695503  712152 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0918 19:19:15.695511  712152 command_runner.go:130] > [crio.metrics]
	I0918 19:19:15.695518  712152 command_runner.go:130] > # Globally enable or disable metrics support.
	I0918 19:19:15.695526  712152 command_runner.go:130] > # enable_metrics = false
	I0918 19:19:15.695532  712152 command_runner.go:130] > # Specify enabled metrics collectors.
	I0918 19:19:15.695538  712152 command_runner.go:130] > # By default, all metrics are enabled.
	I0918 19:19:15.695546  712152 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0918 19:19:15.695557  712152 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0918 19:19:15.695568  712152 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0918 19:19:15.695582  712152 command_runner.go:130] > # metrics_collectors = [
	I0918 19:19:15.695591  712152 command_runner.go:130] > # 	"operations",
	I0918 19:19:15.695598  712152 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0918 19:19:15.695606  712152 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0918 19:19:15.695612  712152 command_runner.go:130] > # 	"operations_errors",
	I0918 19:19:15.695617  712152 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0918 19:19:15.695626  712152 command_runner.go:130] > # 	"image_pulls_by_name",
	I0918 19:19:15.695632  712152 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0918 19:19:15.695640  712152 command_runner.go:130] > # 	"image_pulls_failures",
	I0918 19:19:15.695652  712152 command_runner.go:130] > # 	"image_pulls_successes",
	I0918 19:19:15.695660  712152 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0918 19:19:15.695671  712152 command_runner.go:130] > # 	"image_layer_reuse",
	I0918 19:19:15.695680  712152 command_runner.go:130] > # 	"containers_oom_total",
	I0918 19:19:15.695685  712152 command_runner.go:130] > # 	"containers_oom",
	I0918 19:19:15.695690  712152 command_runner.go:130] > # 	"processes_defunct",
	I0918 19:19:15.695695  712152 command_runner.go:130] > # 	"operations_total",
	I0918 19:19:15.695700  712152 command_runner.go:130] > # 	"operations_latency_seconds",
	I0918 19:19:15.695706  712152 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0918 19:19:15.695712  712152 command_runner.go:130] > # 	"operations_errors_total",
	I0918 19:19:15.695717  712152 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0918 19:19:15.695729  712152 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0918 19:19:15.695734  712152 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0918 19:19:15.695740  712152 command_runner.go:130] > # 	"image_pulls_success_total",
	I0918 19:19:15.695745  712152 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0918 19:19:15.695750  712152 command_runner.go:130] > # 	"containers_oom_count_total",
	I0918 19:19:15.695755  712152 command_runner.go:130] > # ]
	I0918 19:19:15.695761  712152 command_runner.go:130] > # The port on which the metrics server will listen.
	I0918 19:19:15.695766  712152 command_runner.go:130] > # metrics_port = 9090
	I0918 19:19:15.695772  712152 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0918 19:19:15.695794  712152 command_runner.go:130] > # metrics_socket = ""
	I0918 19:19:15.695803  712152 command_runner.go:130] > # The certificate for the secure metrics server.
	I0918 19:19:15.695811  712152 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0918 19:19:15.695818  712152 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0918 19:19:15.695824  712152 command_runner.go:130] > # certificate on any modification event.
	I0918 19:19:15.695829  712152 command_runner.go:130] > # metrics_cert = ""
	I0918 19:19:15.695836  712152 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0918 19:19:15.695843  712152 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0918 19:19:15.695848  712152 command_runner.go:130] > # metrics_key = ""
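A quick way to exercise this section is to turn metrics on in a drop-in and scrape the default port named above (sketch; assumes nothing else is bound to 9090):

	cat <<'EOF' | sudo tee /etc/crio/crio.conf.d/40-metrics.conf
	[crio.metrics]
	enable_metrics = true
	metrics_port = 9090
	EOF
	sudo systemctl restart crio
	curl -s http://127.0.0.1:9090/metrics | head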
	I0918 19:19:15.695855  712152 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0918 19:19:15.695860  712152 command_runner.go:130] > [crio.tracing]
	I0918 19:19:15.695873  712152 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0918 19:19:15.695878  712152 command_runner.go:130] > # enable_tracing = false
	I0918 19:19:15.695885  712152 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0918 19:19:15.695891  712152 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0918 19:19:15.695897  712152 command_runner.go:130] > # Number of samples to collect per million spans.
	I0918 19:19:15.696101  712152 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
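Tracing follows the same drop-in pattern, under the assumption that an OTLP gRPC collector is already listening on the default endpoint above (1000000 per million samples every span; sketch only):

	cat <<'EOF' | sudo tee /etc/crio/crio.conf.d/50-tracing.conf
	[crio.tracing]
	enable_tracing = true
	tracing_endpoint = "0.0.0.0:4317"
	tracing_sampling_rate_per_million = 1000000
	EOF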
	I0918 19:19:15.696113  712152 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0918 19:19:15.696128  712152 command_runner.go:130] > [crio.stats]
	I0918 19:19:15.696137  712152 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0918 19:19:15.696144  712152 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0918 19:19:15.696153  712152 command_runner.go:130] > # stats_collection_period = 0
	I0918 19:19:15.696822  712152 command_runner.go:130] ! time="2023-09-18 19:19:15.686884292Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I0918 19:19:15.696846  712152 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0918 19:19:15.696924  712152 cni.go:84] Creating CNI manager for ""
	I0918 19:19:15.696949  712152 cni.go:136] 2 nodes found, recommending kindnet
	I0918 19:19:15.696958  712152 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0918 19:19:15.696981  712152 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-689235 NodeName:multinode-689235-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0918 19:19:15.697129  712152 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-689235-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
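minikube generates this manifest itself, but an edited copy can be sanity-checked with kubeadm's own validator (available in recent kubeadm versions; the file name is illustrative):

	kubeadm config validate --config kubeadm.yaml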
	
	I0918 19:19:15.697196  712152 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-689235-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:multinode-689235 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0918 19:19:15.697277  712152 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I0918 19:19:15.707179  712152 command_runner.go:130] > kubeadm
	I0918 19:19:15.707199  712152 command_runner.go:130] > kubectl
	I0918 19:19:15.707204  712152 command_runner.go:130] > kubelet
	I0918 19:19:15.708592  712152 binaries.go:44] Found k8s binaries, skipping transfer
	I0918 19:19:15.708672  712152 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0918 19:19:15.719408  712152 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0918 19:19:15.742376  712152 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
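With the unit file and the 10-kubeadm.conf drop-in written above, the effective kubelet unit on the node can be inspected with systemd's own tooling (sketch):

	systemctl cat kubelet          # shows kubelet.service plus the 10-kubeadm.conf drop-in
	sudo systemctl daemon-reload   # make systemd pick up the new drop-in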
	I0918 19:19:15.764991  712152 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0918 19:19:15.769719  712152 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0918 19:19:15.785073  712152 host.go:66] Checking if "multinode-689235" exists ...
	I0918 19:19:15.785372  712152 start.go:304] JoinCluster: &{Name:multinode-689235 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:multinode-689235 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:
9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0918 19:19:15.785464  712152 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0918 19:19:15.785514  712152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-689235
	I0918 19:19:15.785904  712152 config.go:182] Loaded profile config "multinode-689235": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0918 19:19:15.804215  712152 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33490 SSHKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/multinode-689235/id_rsa Username:docker}
	I0918 19:19:15.976769  712152 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 9dl6uo.6rvsf4s2s6gy4mob --discovery-token-ca-cert-hash sha256:1471e1bb7c66f1f1f8363746a1e5f2ae35a8554d6ad2342a0b3973b70608e7c8 
	I0918 19:19:15.976818  712152 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0918 19:19:15.976847  712152 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9dl6uo.6rvsf4s2s6gy4mob --discovery-token-ca-cert-hash sha256:1471e1bb7c66f1f1f8363746a1e5f2ae35a8554d6ad2342a0b3973b70608e7c8 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-689235-m02"
	I0918 19:19:16.034160  712152 command_runner.go:130] > [preflight] Running pre-flight checks
	I0918 19:19:16.071296  712152 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0918 19:19:16.071317  712152 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1044-aws
	I0918 19:19:16.071324  712152 command_runner.go:130] > OS: Linux
	I0918 19:19:16.071330  712152 command_runner.go:130] > CGROUPS_CPU: enabled
	I0918 19:19:16.071337  712152 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0918 19:19:16.071343  712152 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0918 19:19:16.071349  712152 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0918 19:19:16.071356  712152 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0918 19:19:16.071362  712152 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0918 19:19:16.071370  712152 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0918 19:19:16.071376  712152 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0918 19:19:16.071385  712152 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0918 19:19:16.187177  712152 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0918 19:19:16.187201  712152 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0918 19:19:16.221180  712152 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0918 19:19:16.221427  712152 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0918 19:19:16.221494  712152 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0918 19:19:16.328840  712152 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0918 19:19:19.346229  712152 command_runner.go:130] > This node has joined the cluster:
	I0918 19:19:19.346290  712152 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0918 19:19:19.346313  712152 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0918 19:19:19.346744  712152 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0918 19:19:19.350202  712152 command_runner.go:130] ! W0918 19:19:16.033422    1039 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0918 19:19:19.350233  712152 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1044-aws\n", err: exit status 1
	I0918 19:19:19.350245  712152 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0918 19:19:19.350267  712152 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9dl6uo.6rvsf4s2s6gy4mob --discovery-token-ca-cert-hash sha256:1471e1bb7c66f1f1f8363746a1e5f2ae35a8554d6ad2342a0b3973b70608e7c8 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-689235-m02": (3.373399183s)
	I0918 19:19:19.350283  712152 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0918 19:19:19.569362  712152 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I0918 19:19:19.569445  712152 start.go:306] JoinCluster complete in 3.784071126s
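With JoinCluster complete, the join can be confirmed from the control-plane context; the bootstrap token printed above stays listed until its TTL expires (sketch):

	kubectl get nodes -o wide
	sudo kubeadm token list   # run on the control-plane node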
	I0918 19:19:19.569471  712152 cni.go:84] Creating CNI manager for ""
	I0918 19:19:19.569491  712152 cni.go:136] 2 nodes found, recommending kindnet
	I0918 19:19:19.569567  712152 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0918 19:19:19.574268  712152 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0918 19:19:19.574290  712152 command_runner.go:130] >   Size: 3841245   	Blocks: 7504       IO Block: 4096   regular file
	I0918 19:19:19.574297  712152 command_runner.go:130] > Device: 36h/54d	Inode: 1308310     Links: 1
	I0918 19:19:19.574305  712152 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0918 19:19:19.574317  712152 command_runner.go:130] > Access: 2023-05-09 19:54:42.000000000 +0000
	I0918 19:19:19.574324  712152 command_runner.go:130] > Modify: 2023-05-09 19:54:42.000000000 +0000
	I0918 19:19:19.574330  712152 command_runner.go:130] > Change: 2023-09-18 18:55:16.104663439 +0000
	I0918 19:19:19.574336  712152 command_runner.go:130] >  Birth: 2023-09-18 18:55:16.060663193 +0000
	I0918 19:19:19.574589  712152 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.2/kubectl ...
	I0918 19:19:19.574601  712152 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0918 19:19:19.596464  712152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0918 19:19:19.882319  712152 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0918 19:19:19.882342  712152 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0918 19:19:19.882350  712152 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0918 19:19:19.882356  712152 command_runner.go:130] > daemonset.apps/kindnet configured
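The "unchanged"/"configured" results mean the kindnet manifest was already largely in place; its rollout across both nodes can be watched directly (the app=kindnet label is how the upstream manifest selects its pods, stated here as an assumption):

	kubectl -n kube-system rollout status daemonset/kindnet
	kubectl -n kube-system get pods -l app=kindnet -o wide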
	I0918 19:19:19.882747  712152 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17263-642665/kubeconfig
	I0918 19:19:19.883036  712152 kapi.go:59] client config for multinode-689235: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17263-642665/.minikube/profiles/multinode-689235/client.crt", KeyFile:"/home/jenkins/minikube-integration/17263-642665/.minikube/profiles/multinode-689235/client.key", CAFile:"/home/jenkins/minikube-integration/17263-642665/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Ne
xtProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1697f50), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0918 19:19:19.883359  712152 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0918 19:19:19.883374  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:19.883383  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:19.883390  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:19.885939  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:19.885960  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:19.885969  712152 round_trippers.go:580]     Audit-Id: 4fc37144-1b0c-4fce-b61d-01a6335f6f3b
	I0918 19:19:19.885975  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:19.885981  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:19.885987  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:19.885993  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:19.886000  712152 round_trippers.go:580]     Content-Length: 291
	I0918 19:19:19.886006  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:19 GMT
	I0918 19:19:19.886271  712152 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"afeed66f-a03c-4d03-a96f-db9cbbb7a8b0","resourceVersion":"423","creationTimestamp":"2023-09-18T19:18:16Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0918 19:19:19.886382  712152 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-689235" context rescaled to 1 replicas
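The rescale above goes through the deployment's scale subresource; the equivalent manual operation, which keeps a single CoreDNS replica in multi-node clusters, is:

	kubectl -n kube-system scale deployment coredns --replicas=1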
	I0918 19:19:19.886406  712152 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0918 19:19:19.895059  712152 out.go:177] * Verifying Kubernetes components...
	I0918 19:19:19.897164  712152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 19:19:19.911728  712152 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17263-642665/kubeconfig
	I0918 19:19:19.912023  712152 kapi.go:59] client config for multinode-689235: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17263-642665/.minikube/profiles/multinode-689235/client.crt", KeyFile:"/home/jenkins/minikube-integration/17263-642665/.minikube/profiles/multinode-689235/client.key", CAFile:"/home/jenkins/minikube-integration/17263-642665/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Ne
xtProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1697f50), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0918 19:19:19.912306  712152 node_ready.go:35] waiting up to 6m0s for node "multinode-689235-m02" to be "Ready" ...
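The polling loop that follows (repeated GETs against /api/v1/nodes/multinode-689235-m02) is the imperative form of a declarative wait such as:

	kubectl wait --for=condition=Ready node/multinode-689235-m02 --timeout=6m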
	I0918 19:19:19.912396  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235-m02
	I0918 19:19:19.912408  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:19.912418  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:19.912425  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:19.915447  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:19.915471  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:19.915480  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:19 GMT
	I0918 19:19:19.915486  712152 round_trippers.go:580]     Audit-Id: 58d441fe-5a01-4f92-b633-c2fa03c09af6
	I0918 19:19:19.915493  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:19.915499  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:19.915505  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:19.915511  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:19.915639  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235-m02","uid":"097117e2-35ff-448d-ae76-f008b19fbc7c","resourceVersion":"459","creationTimestamp":"2023-09-18T19:19:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5183 chars]
	I0918 19:19:19.916081  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235-m02
	I0918 19:19:19.916096  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:19.916105  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:19.916112  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:19.918905  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:19.918928  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:19.918936  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:19 GMT
	I0918 19:19:19.918943  712152 round_trippers.go:580]     Audit-Id: 48eb2d07-2fcc-4171-a332-8f308d289171
	I0918 19:19:19.918949  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:19.918955  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:19.918961  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:19.918972  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:19.919398  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235-m02","uid":"097117e2-35ff-448d-ae76-f008b19fbc7c","resourceVersion":"459","creationTimestamp":"2023-09-18T19:19:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5183 chars]
	I0918 19:19:20.420721  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235-m02
	I0918 19:19:20.420746  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:20.420756  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:20.420764  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:20.423377  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:20.423441  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:20.423464  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:20.423484  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:20.423518  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:20.423540  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:20.423559  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:20 GMT
	I0918 19:19:20.423580  712152 round_trippers.go:580]     Audit-Id: 43061374-bb1d-4f6a-b477-089b34b08fac
	I0918 19:19:20.423721  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235-m02","uid":"097117e2-35ff-448d-ae76-f008b19fbc7c","resourceVersion":"462","creationTimestamp":"2023-09-18T19:19:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:1
9Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5292 chars]
	I0918 19:19:20.920012  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235-m02
	I0918 19:19:20.920036  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:20.920046  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:20.920053  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:20.927401  712152 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0918 19:19:20.927490  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:20.927513  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:20.927531  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:20.927582  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:20.927601  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:20.927630  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:20 GMT
	I0918 19:19:20.927653  712152 round_trippers.go:580]     Audit-Id: 104038a4-e517-4e19-84e9-ab70a8bf07ca
	I0918 19:19:20.928609  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235-m02","uid":"097117e2-35ff-448d-ae76-f008b19fbc7c","resourceVersion":"462","creationTimestamp":"2023-09-18T19:19:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:1
9Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5292 chars]
	I0918 19:19:21.419975  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235-m02
	I0918 19:19:21.420005  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:21.420015  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:21.420023  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:21.422436  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:21.422460  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:21.422471  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:21 GMT
	I0918 19:19:21.422478  712152 round_trippers.go:580]     Audit-Id: e6c21aea-c760-4d88-98f9-11f33a7fc556
	I0918 19:19:21.422484  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:21.422491  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:21.422497  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:21.422507  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:21.422707  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235-m02","uid":"097117e2-35ff-448d-ae76-f008b19fbc7c","resourceVersion":"462","creationTimestamp":"2023-09-18T19:19:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:1
9Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5292 chars]
	I0918 19:19:21.920477  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235-m02
	I0918 19:19:21.920500  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:21.920509  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:21.920517  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:21.922861  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:21.922882  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:21.922891  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:21.922897  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:21.922903  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:21 GMT
	I0918 19:19:21.922909  712152 round_trippers.go:580]     Audit-Id: 0a0be143-c9cd-474d-b8fd-cb7274252fe7
	I0918 19:19:21.922915  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:21.922921  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:21.923528  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235-m02","uid":"097117e2-35ff-448d-ae76-f008b19fbc7c","resourceVersion":"462","creationTimestamp":"2023-09-18T19:19:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:1
9Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5292 chars]
	I0918 19:19:21.923938  712152 node_ready.go:58] node "multinode-689235-m02" has status "Ready":"False"
	I0918 19:19:22.419936  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235-m02
	I0918 19:19:22.419957  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:22.419968  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:22.419975  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:22.422393  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:22.422415  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:22.422424  712152 round_trippers.go:580]     Audit-Id: 2fa9309a-90e6-4036-be5e-1003885151ab
	I0918 19:19:22.422431  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:22.422437  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:22.422443  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:22.422449  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:22.422459  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:22 GMT
	I0918 19:19:22.422600  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235-m02","uid":"097117e2-35ff-448d-ae76-f008b19fbc7c","resourceVersion":"462","creationTimestamp":"2023-09-18T19:19:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:1
9Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5292 chars]
	I0918 19:19:22.920744  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235-m02
	I0918 19:19:22.920768  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:22.920779  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:22.920786  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:22.923262  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:22.923282  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:22.923291  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:22.923297  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:22.923305  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:22.923311  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:22.923318  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:22 GMT
	I0918 19:19:22.923324  712152 round_trippers.go:580]     Audit-Id: 2cdfd471-be9a-4fce-8130-b027dcc50d57
	I0918 19:19:22.923438  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235-m02","uid":"097117e2-35ff-448d-ae76-f008b19fbc7c","resourceVersion":"462","creationTimestamp":"2023-09-18T19:19:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:1
9Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5292 chars]
	I0918 19:19:23.420579  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235-m02
	I0918 19:19:23.420604  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:23.420614  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:23.420621  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:23.423961  712152 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 19:19:23.423981  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:23.423990  712152 round_trippers.go:580]     Audit-Id: 7523ddd8-b38e-41f3-8a96-f2bb6ab6ebff
	I0918 19:19:23.423997  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:23.424003  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:23.424009  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:23.424016  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:23.424022  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:23 GMT
	I0918 19:19:23.424147  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235-m02","uid":"097117e2-35ff-448d-ae76-f008b19fbc7c","resourceVersion":"462","creationTimestamp":"2023-09-18T19:19:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:1
9Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5292 chars]
	I0918 19:19:23.920552  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235-m02
	I0918 19:19:23.920575  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:23.920584  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:23.920591  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:23.922992  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:23.923019  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:23.923027  712152 round_trippers.go:580]     Audit-Id: be161d20-b0ac-422e-852a-a2fec317e4eb
	I0918 19:19:23.923035  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:23.923041  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:23.923047  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:23.923053  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:23.923060  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:23 GMT
	I0918 19:19:23.923155  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235-m02","uid":"097117e2-35ff-448d-ae76-f008b19fbc7c","resourceVersion":"462","creationTimestamp":"2023-09-18T19:19:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:1
9Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5292 chars]
	I0918 19:19:24.420206  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235-m02
	I0918 19:19:24.420237  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:24.420248  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:24.420255  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:24.422735  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:24.422761  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:24.422769  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:24.422776  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:24.422782  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:24.422789  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:24 GMT
	I0918 19:19:24.422795  712152 round_trippers.go:580]     Audit-Id: 4999c785-31fd-4eb6-b45d-b5bf1ab2e0c9
	I0918 19:19:24.422802  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:24.423010  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235-m02","uid":"097117e2-35ff-448d-ae76-f008b19fbc7c","resourceVersion":"462","creationTimestamp":"2023-09-18T19:19:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:1
9Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5292 chars]
	I0918 19:19:24.423392  712152 node_ready.go:58] node "multinode-689235-m02" has status "Ready":"False"
	I0918 19:19:24.920709  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235-m02
	I0918 19:19:24.920734  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:24.920743  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:24.920751  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:24.923287  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:24.923308  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:24.923316  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:24.923322  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:24.923329  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:24.923337  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:24 GMT
	I0918 19:19:24.923343  712152 round_trippers.go:580]     Audit-Id: aa95efbb-3ca8-4986-bd1e-252cdf785a9f
	I0918 19:19:24.923349  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:24.923465  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235-m02","uid":"097117e2-35ff-448d-ae76-f008b19fbc7c","resourceVersion":"462","creationTimestamp":"2023-09-18T19:19:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:1
9Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5292 chars]
	I0918 19:19:25.420623  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235-m02
	I0918 19:19:25.420649  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:25.420659  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:25.420672  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:25.423257  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:25.423283  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:25.423292  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:25.423299  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:25.423305  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:25 GMT
	I0918 19:19:25.423312  712152 round_trippers.go:580]     Audit-Id: 0f5083ec-9131-4efe-8b81-8950f0cb0985
	I0918 19:19:25.423318  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:25.423325  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:25.423578  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235-m02","uid":"097117e2-35ff-448d-ae76-f008b19fbc7c","resourceVersion":"462","creationTimestamp":"2023-09-18T19:19:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:1
9Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5292 chars]
	I0918 19:19:25.920007  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235-m02
	I0918 19:19:25.920030  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:25.920040  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:25.920047  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:25.922726  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:25.922752  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:25.922764  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:25.922770  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:25.922779  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:25.922792  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:25 GMT
	I0918 19:19:25.922798  712152 round_trippers.go:580]     Audit-Id: 494318f7-fea6-442c-ac8b-b5695f7cff0e
	I0918 19:19:25.922804  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:25.922966  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235-m02","uid":"097117e2-35ff-448d-ae76-f008b19fbc7c","resourceVersion":"462","creationTimestamp":"2023-09-18T19:19:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:1
9Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5292 chars]
	I0918 19:19:26.420019  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235-m02
	I0918 19:19:26.420043  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:26.420053  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:26.420060  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:26.422648  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:26.422673  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:26.422681  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:26.422688  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:26.422694  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:26.422701  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:26 GMT
	I0918 19:19:26.422707  712152 round_trippers.go:580]     Audit-Id: 0845001f-c9cc-412b-802f-ebb3d830e113
	I0918 19:19:26.422713  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:26.423109  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235-m02","uid":"097117e2-35ff-448d-ae76-f008b19fbc7c","resourceVersion":"462","creationTimestamp":"2023-09-18T19:19:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:1
9Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5292 chars]
	I0918 19:19:26.423480  712152 node_ready.go:58] node "multinode-689235-m02" has status "Ready":"False"
	I0918 19:19:26.920810  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235-m02
	I0918 19:19:26.920834  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:26.920845  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:26.920853  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:26.923376  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:26.923395  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:26.923403  712152 round_trippers.go:580]     Audit-Id: b84c66b1-978c-4b5d-bca9-8fb193cb67a0
	I0918 19:19:26.923409  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:26.923415  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:26.923421  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:26.923428  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:26.923434  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:26 GMT
	I0918 19:19:26.923928  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235-m02","uid":"097117e2-35ff-448d-ae76-f008b19fbc7c","resourceVersion":"462","creationTimestamp":"2023-09-18T19:19:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:1
9Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5292 chars]
	I0918 19:19:27.419977  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235-m02
	I0918 19:19:27.420007  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:27.420016  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:27.420024  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:27.422393  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:27.422417  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:27.422426  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:27 GMT
	I0918 19:19:27.422433  712152 round_trippers.go:580]     Audit-Id: 3385b8e5-6568-43fb-bd67-80cfa469376a
	I0918 19:19:27.422439  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:27.422445  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:27.422451  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:27.422462  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:27.422654  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235-m02","uid":"097117e2-35ff-448d-ae76-f008b19fbc7c","resourceVersion":"462","creationTimestamp":"2023-09-18T19:19:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:1
9Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5292 chars]
	I0918 19:19:27.920779  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235-m02
	I0918 19:19:27.920800  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:27.920809  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:27.920818  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:27.923381  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:27.923405  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:27.923413  712152 round_trippers.go:580]     Audit-Id: 3bb1931d-e203-4fb4-aab9-072be3fe8ee7
	I0918 19:19:27.923420  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:27.923426  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:27.923432  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:27.923439  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:27.923450  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:27 GMT
	I0918 19:19:27.923652  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235-m02","uid":"097117e2-35ff-448d-ae76-f008b19fbc7c","resourceVersion":"462","creationTimestamp":"2023-09-18T19:19:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:1
9Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5292 chars]
	I0918 19:19:28.420478  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235-m02
	I0918 19:19:28.420503  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:28.420514  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:28.420521  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:28.423135  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:28.423159  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:28.423168  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:28 GMT
	I0918 19:19:28.423175  712152 round_trippers.go:580]     Audit-Id: b0496922-35d6-4efa-85c8-58ffe764e929
	I0918 19:19:28.423181  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:28.423187  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:28.423194  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:28.423203  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:28.423592  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235-m02","uid":"097117e2-35ff-448d-ae76-f008b19fbc7c","resourceVersion":"462","creationTimestamp":"2023-09-18T19:19:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:1
9Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5292 chars]
	I0918 19:19:28.423990  712152 node_ready.go:58] node "multinode-689235-m02" has status "Ready":"False"
	I0918 19:19:28.920925  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235-m02
	I0918 19:19:28.920946  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:28.920956  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:28.920963  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:28.923407  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:28.923431  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:28.923441  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:28.923447  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:28.923454  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:28 GMT
	I0918 19:19:28.923461  712152 round_trippers.go:580]     Audit-Id: 483ed873-fd28-4fbd-afa0-243d134305cc
	I0918 19:19:28.923473  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:28.923488  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:28.923688  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235-m02","uid":"097117e2-35ff-448d-ae76-f008b19fbc7c","resourceVersion":"462","creationTimestamp":"2023-09-18T19:19:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:1
9Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5292 chars]
	I0918 19:19:29.420374  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235-m02
	I0918 19:19:29.420396  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:29.420406  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:29.420413  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:29.422852  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:29.422873  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:29.422881  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:29.422888  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:29 GMT
	I0918 19:19:29.422896  712152 round_trippers.go:580]     Audit-Id: 7d3d2428-438d-4428-9003-e2e83e5a186b
	I0918 19:19:29.422903  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:29.422910  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:29.422916  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:29.423016  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235-m02","uid":"097117e2-35ff-448d-ae76-f008b19fbc7c","resourceVersion":"484","creationTimestamp":"2023-09-18T19:19:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:1
9Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I0918 19:19:29.919995  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235-m02
	I0918 19:19:29.920019  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:29.920029  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:29.920036  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:29.922588  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:29.922616  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:29.922624  712152 round_trippers.go:580]     Audit-Id: 6212ccdc-8755-4cfc-9890-cda52e5e8d7d
	I0918 19:19:29.922640  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:29.922646  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:29.922652  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:29.922660  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:29.922666  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:29 GMT
	I0918 19:19:29.922782  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235-m02","uid":"097117e2-35ff-448d-ae76-f008b19fbc7c","resourceVersion":"484","creationTimestamp":"2023-09-18T19:19:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:1
9Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I0918 19:19:30.420014  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235-m02
	I0918 19:19:30.420041  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:30.420051  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:30.420059  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:30.422637  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:30.422663  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:30.422672  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:30.422679  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:30.422685  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:30 GMT
	I0918 19:19:30.422691  712152 round_trippers.go:580]     Audit-Id: 59bbee68-6bd5-4757-8f54-62d96ec7e031
	I0918 19:19:30.422700  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:30.422706  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:30.422863  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235-m02","uid":"097117e2-35ff-448d-ae76-f008b19fbc7c","resourceVersion":"484","creationTimestamp":"2023-09-18T19:19:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:1
9Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I0918 19:19:30.919927  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235-m02
	I0918 19:19:30.919952  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:30.919963  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:30.919972  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:30.922362  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:30.922389  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:30.922397  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:30.922404  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:30.922410  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:30 GMT
	I0918 19:19:30.922416  712152 round_trippers.go:580]     Audit-Id: 6f903f5a-21bd-44e1-be6f-8d946f032769
	I0918 19:19:30.922423  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:30.922429  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:30.922594  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235-m02","uid":"097117e2-35ff-448d-ae76-f008b19fbc7c","resourceVersion":"484","creationTimestamp":"2023-09-18T19:19:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:1
9Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I0918 19:19:30.922982  712152 node_ready.go:58] node "multinode-689235-m02" has status "Ready":"False"
	I0918 19:19:31.420454  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235-m02
	I0918 19:19:31.420478  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:31.420489  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:31.420497  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:31.422985  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:31.423005  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:31.423013  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:31.423020  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:31 GMT
	I0918 19:19:31.423028  712152 round_trippers.go:580]     Audit-Id: b355c715-e8fa-4c10-9187-7a8a14a96225
	I0918 19:19:31.423034  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:31.423049  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:31.423055  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:31.423250  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235-m02","uid":"097117e2-35ff-448d-ae76-f008b19fbc7c","resourceVersion":"484","creationTimestamp":"2023-09-18T19:19:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:1
9Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I0918 19:19:31.920178  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235-m02
	I0918 19:19:31.920202  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:31.920212  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:31.920220  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:31.922762  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:31.922787  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:31.922795  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:31.922802  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:31 GMT
	I0918 19:19:31.922808  712152 round_trippers.go:580]     Audit-Id: 4b5e6633-befd-4282-b8cb-9c03068eb85f
	I0918 19:19:31.922814  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:31.922834  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:31.922841  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:31.923544  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235-m02","uid":"097117e2-35ff-448d-ae76-f008b19fbc7c","resourceVersion":"484","creationTimestamp":"2023-09-18T19:19:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:1
9Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I0918 19:19:32.419995  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235-m02
	I0918 19:19:32.420019  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:32.420029  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:32.420037  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:32.422489  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:32.422510  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:32.422518  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:32.422524  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:32.422531  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:32.422537  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:32.422543  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:32 GMT
	I0918 19:19:32.422549  712152 round_trippers.go:580]     Audit-Id: 19737c04-d4ee-4ad1-888e-640ed7132d12
	I0918 19:19:32.422682  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235-m02","uid":"097117e2-35ff-448d-ae76-f008b19fbc7c","resourceVersion":"484","creationTimestamp":"2023-09-18T19:19:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I0918 19:19:32.920421  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235-m02
	I0918 19:19:32.920440  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:32.920450  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:32.920457  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:32.923172  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:32.923202  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:32.923212  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:32.923219  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:32 GMT
	I0918 19:19:32.923226  712152 round_trippers.go:580]     Audit-Id: b075604d-0aad-4e33-af2d-b9ebc3a1e604
	I0918 19:19:32.923233  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:32.923240  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:32.923246  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:32.923345  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235-m02","uid":"097117e2-35ff-448d-ae76-f008b19fbc7c","resourceVersion":"484","creationTimestamp":"2023-09-18T19:19:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I0918 19:19:32.923726  712152 node_ready.go:58] node "multinode-689235-m02" has status "Ready":"False"
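The block above is one complete iteration of the node-readiness wait: node_ready.go re-issues GET /api/v1/nodes/multinode-689235-m02 roughly every 500 ms and re-reads the Node's status until its Ready condition turns True (here it is still False). A minimal standalone sketch of that polling pattern with client-go follows; the kubeconfig loading, hard-coded node name, and 6-minute timeout are illustrative assumptions rather than minikube's actual implementation.

// A hedged sketch of the readiness poll visible in the log: fetch the Node
// every 500 ms and stop once conditions[type=Ready] reports status True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeIsReady mirrors the check behind the `has status "Ready":"False"` lines.
func nodeIsReady(node *corev1.Node) bool {
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumption: a kubeconfig at the default ~/.kube/config location.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()

	// ~500 ms cadence, matching the :XX.42 / :XX.92 GET timestamps above.
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		node, err := clientset.CoreV1().Nodes().Get(ctx, "multinode-689235-m02", metav1.GetOptions{})
		if err == nil && nodeIsReady(node) {
			fmt.Println("node is Ready")
			return
		}
		select {
		case <-ctx.Done():
			panic("timed out waiting for node to become Ready")
		case <-ticker.C:
		}
	}
}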
	I0918 19:19:33.420131  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235-m02
	I0918 19:19:33.420158  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:33.420168  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:33.420175  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:33.422732  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:33.422756  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:33.422766  712152 round_trippers.go:580]     Audit-Id: a5d0e248-8bb6-43c5-a214-c66beebdcf23
	I0918 19:19:33.422773  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:33.422779  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:33.422785  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:33.422791  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:33.422797  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:33 GMT
	I0918 19:19:33.422964  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235-m02","uid":"097117e2-35ff-448d-ae76-f008b19fbc7c","resourceVersion":"484","creationTimestamp":"2023-09-18T19:19:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I0918 19:19:33.920263  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235-m02
	I0918 19:19:33.920288  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:33.920300  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:33.920307  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:33.922794  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:33.922817  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:33.922832  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:33.922838  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:33.922844  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:33.922850  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:33 GMT
	I0918 19:19:33.922856  712152 round_trippers.go:580]     Audit-Id: 89a88597-dfde-4fe7-9f7f-b55aaa15545f
	I0918 19:19:33.922862  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:33.923036  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235-m02","uid":"097117e2-35ff-448d-ae76-f008b19fbc7c","resourceVersion":"484","creationTimestamp":"2023-09-18T19:19:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I0918 19:19:34.420036  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235-m02
	I0918 19:19:34.420061  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:34.420071  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:34.420079  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:34.422586  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:34.422607  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:34.422615  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:34 GMT
	I0918 19:19:34.422622  712152 round_trippers.go:580]     Audit-Id: 7ecf67a0-4d43-4b2b-9e45-e491117019ff
	I0918 19:19:34.422638  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:34.422646  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:34.422652  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:34.422658  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:34.422811  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235-m02","uid":"097117e2-35ff-448d-ae76-f008b19fbc7c","resourceVersion":"484","creationTimestamp":"2023-09-18T19:19:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I0918 19:19:34.920960  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235-m02
	I0918 19:19:34.920994  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:34.921004  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:34.921012  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:34.923512  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:34.923534  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:34.923543  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:34.923550  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:34 GMT
	I0918 19:19:34.923556  712152 round_trippers.go:580]     Audit-Id: 1b9c0a3a-ad99-40d5-93ee-f35f7da47150
	I0918 19:19:34.923562  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:34.923568  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:34.923574  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:34.923652  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235-m02","uid":"097117e2-35ff-448d-ae76-f008b19fbc7c","resourceVersion":"484","creationTimestamp":"2023-09-18T19:19:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I0918 19:19:34.924060  712152 node_ready.go:58] node "multinode-689235-m02" has status "Ready":"False"
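The round_trippers.go lines themselves come from client-go's debug transport, which at high log verbosity wraps the HTTP transport and prints each request's method, URL, and headers, then the response status, latency, and headers, including the API Priority and Fairness identifiers X-Kubernetes-Pf-Flowschema-Uid and X-Kubernetes-Pf-Prioritylevel-Uid attached by the apiserver. A self-contained sketch of that wrapping-RoundTripper pattern is below; it only mimics the shape of the log output and is not client-go's implementation.

// A hedged sketch of a logging http.RoundTripper in the spirit of
// client-go's round_trippers.go debug output.
package main

import (
	"log"
	"net/http"
	"time"
)

type loggingRoundTripper struct {
	next http.RoundTripper
}

func (l loggingRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
	// Log the request line and headers, as in the "Request Headers:" lines.
	log.Printf("%s %s", req.Method, req.URL)
	for name, values := range req.Header {
		log.Printf("    %s: %v", name, values)
	}
	start := time.Now()
	resp, err := l.next.RoundTrip(req)
	if err != nil {
		return nil, err
	}
	// Log status, latency, and response headers, as in "Response Status: ... in N milliseconds".
	log.Printf("Response Status: %s in %d milliseconds", resp.Status, time.Since(start).Milliseconds())
	for name, values := range resp.Header {
		log.Printf("    %s: %v", name, values)
	}
	return resp, nil
}

func main() {
	client := &http.Client{Transport: loggingRoundTripper{next: http.DefaultTransport}}
	resp, err := client.Get("https://example.org/")
	if err != nil {
		log.Fatal(err)
	}
	resp.Body.Close()
}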
	I0918 19:19:35.420661  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235-m02
	I0918 19:19:35.420693  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:35.420703  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:35.420711  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:35.423341  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:35.423363  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:35.423372  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:35.423378  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:35.423385  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:35.423391  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:35.423398  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:35 GMT
	I0918 19:19:35.423404  712152 round_trippers.go:580]     Audit-Id: c672f38b-2e2e-4320-bab1-5323af80d1c4
	I0918 19:19:35.423546  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235-m02","uid":"097117e2-35ff-448d-ae76-f008b19fbc7c","resourceVersion":"484","creationTimestamp":"2023-09-18T19:19:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I0918 19:19:35.920593  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235-m02
	I0918 19:19:35.920617  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:35.920627  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:35.920634  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:35.923148  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:35.923168  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:35.923177  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:35.923184  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:35.923190  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:35.923197  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:35.923203  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:35 GMT
	I0918 19:19:35.923209  712152 round_trippers.go:580]     Audit-Id: a8b5e203-3967-40f3-bc69-900414021f1d
	I0918 19:19:35.923322  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235-m02","uid":"097117e2-35ff-448d-ae76-f008b19fbc7c","resourceVersion":"484","creationTimestamp":"2023-09-18T19:19:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I0918 19:19:36.420887  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235-m02
	I0918 19:19:36.420911  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:36.420921  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:36.420928  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:36.423421  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:36.423441  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:36.423449  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:36 GMT
	I0918 19:19:36.423456  712152 round_trippers.go:580]     Audit-Id: 1eb1c42a-65d7-4af4-82c2-ce1686983551
	I0918 19:19:36.423462  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:36.423468  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:36.423474  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:36.423480  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:36.423604  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235-m02","uid":"097117e2-35ff-448d-ae76-f008b19fbc7c","resourceVersion":"484","creationTimestamp":"2023-09-18T19:19:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I0918 19:19:36.920569  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235-m02
	I0918 19:19:36.920595  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:36.920604  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:36.920633  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:36.923145  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:36.923172  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:36.923181  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:36.923188  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:36.923195  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:36.923202  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:36.923208  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:36 GMT
	I0918 19:19:36.923215  712152 round_trippers.go:580]     Audit-Id: 4bc3ec75-0291-444e-b3ca-27c0340f5976
	I0918 19:19:36.923343  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235-m02","uid":"097117e2-35ff-448d-ae76-f008b19fbc7c","resourceVersion":"484","creationTimestamp":"2023-09-18T19:19:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I0918 19:19:37.419988  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235-m02
	I0918 19:19:37.420010  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:37.420020  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:37.420028  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:37.422811  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:37.422864  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:37.422873  712152 round_trippers.go:580]     Audit-Id: 5da79764-bd0c-475d-b1a5-0da8809fed1d
	I0918 19:19:37.422879  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:37.422886  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:37.422892  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:37.422898  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:37.422905  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:37 GMT
	I0918 19:19:37.423102  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235-m02","uid":"097117e2-35ff-448d-ae76-f008b19fbc7c","resourceVersion":"484","creationTimestamp":"2023-09-18T19:19:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I0918 19:19:37.423483  712152 node_ready.go:58] node "multinode-689235-m02" has status "Ready":"False"
	I0918 19:19:37.920750  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235-m02
	I0918 19:19:37.920772  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:37.920782  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:37.920788  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:37.923212  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:37.923232  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:37.923240  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:37.923247  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:37.923253  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:37.923259  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:37.923265  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:37 GMT
	I0918 19:19:37.923271  712152 round_trippers.go:580]     Audit-Id: 7d477629-823c-43df-83bd-324839a4a48f
	I0918 19:19:37.923367  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235-m02","uid":"097117e2-35ff-448d-ae76-f008b19fbc7c","resourceVersion":"484","creationTimestamp":"2023-09-18T19:19:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I0918 19:19:38.420535  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235-m02
	I0918 19:19:38.420560  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:38.420570  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:38.420578  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:38.423121  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:38.423147  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:38.423155  712152 round_trippers.go:580]     Audit-Id: 9e69ba55-4510-4a8e-82ad-e35292b4e528
	I0918 19:19:38.423162  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:38.423168  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:38.423174  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:38.423180  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:38.423192  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:38 GMT
	I0918 19:19:38.423513  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235-m02","uid":"097117e2-35ff-448d-ae76-f008b19fbc7c","resourceVersion":"484","creationTimestamp":"2023-09-18T19:19:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I0918 19:19:38.920307  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235-m02
	I0918 19:19:38.920330  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:38.920340  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:38.920347  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:38.922896  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:38.922916  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:38.922924  712152 round_trippers.go:580]     Audit-Id: 338eb804-c3ff-4b21-a1ab-cabc5883f5aa
	I0918 19:19:38.922930  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:38.922936  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:38.922942  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:38.922948  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:38.922954  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:38 GMT
	I0918 19:19:38.923077  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235-m02","uid":"097117e2-35ff-448d-ae76-f008b19fbc7c","resourceVersion":"484","creationTimestamp":"2023-09-18T19:19:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I0918 19:19:39.420152  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235-m02
	I0918 19:19:39.420174  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:39.420184  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:39.420191  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:39.422604  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:39.422624  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:39.422633  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:39.422639  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:39.422645  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:39.422651  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:39 GMT
	I0918 19:19:39.422657  712152 round_trippers.go:580]     Audit-Id: fbb641aa-8292-42d9-a170-406216a20b90
	I0918 19:19:39.422663  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:39.422773  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235-m02","uid":"097117e2-35ff-448d-ae76-f008b19fbc7c","resourceVersion":"484","creationTimestamp":"2023-09-18T19:19:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I0918 19:19:39.920567  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235-m02
	I0918 19:19:39.920591  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:39.920601  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:39.920608  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:39.923096  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:39.923117  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:39.923125  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:39.923132  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:39 GMT
	I0918 19:19:39.923138  712152 round_trippers.go:580]     Audit-Id: a9cbfbf2-7b7b-42e1-afe7-5bdc3fe6d619
	I0918 19:19:39.923144  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:39.923151  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:39.923157  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:39.923270  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235-m02","uid":"097117e2-35ff-448d-ae76-f008b19fbc7c","resourceVersion":"484","creationTimestamp":"2023-09-18T19:19:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I0918 19:19:39.923658  712152 node_ready.go:58] node "multinode-689235-m02" has status "Ready":"False"
	I0918 19:19:40.420570  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235-m02
	I0918 19:19:40.420593  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:40.420602  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:40.420610  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:40.423064  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:40.423085  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:40.423094  712152 round_trippers.go:580]     Audit-Id: 286259d4-53f4-4633-9168-a01e1161a96d
	I0918 19:19:40.423100  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:40.423106  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:40.423123  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:40.423130  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:40.423136  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:40 GMT
	I0918 19:19:40.423283  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235-m02","uid":"097117e2-35ff-448d-ae76-f008b19fbc7c","resourceVersion":"484","creationTimestamp":"2023-09-18T19:19:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I0918 19:19:40.920049  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235-m02
	I0918 19:19:40.920073  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:40.920083  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:40.920091  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:40.922573  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:40.922597  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:40.922606  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:40 GMT
	I0918 19:19:40.922612  712152 round_trippers.go:580]     Audit-Id: 28ab5c7b-0632-4a1a-a882-fc408e0b1261
	I0918 19:19:40.922619  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:40.922625  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:40.922631  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:40.922637  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:40.922742  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235-m02","uid":"097117e2-35ff-448d-ae76-f008b19fbc7c","resourceVersion":"484","creationTimestamp":"2023-09-18T19:19:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I0918 19:19:41.420812  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235-m02
	I0918 19:19:41.420838  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:41.420849  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:41.420856  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:41.423452  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:41.423473  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:41.423482  712152 round_trippers.go:580]     Audit-Id: c84f5866-97ce-4fe3-b267-fdb8ff535a6b
	I0918 19:19:41.423488  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:41.423498  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:41.423504  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:41.423511  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:41.423517  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:41 GMT
	I0918 19:19:41.423626  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235-m02","uid":"097117e2-35ff-448d-ae76-f008b19fbc7c","resourceVersion":"484","creationTimestamp":"2023-09-18T19:19:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I0918 19:19:41.920347  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235-m02
	I0918 19:19:41.920371  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:41.920381  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:41.920388  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:41.922852  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:41.922881  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:41.922890  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:41.922897  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:41.922904  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:41.922910  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:41.922916  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:41 GMT
	I0918 19:19:41.922923  712152 round_trippers.go:580]     Audit-Id: c7e057b4-ce4a-406d-aa64-c8a76e4bed88
	I0918 19:19:41.923055  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235-m02","uid":"097117e2-35ff-448d-ae76-f008b19fbc7c","resourceVersion":"484","creationTimestamp":"2023-09-18T19:19:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I0918 19:19:42.420103  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235-m02
	I0918 19:19:42.420130  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:42.420140  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:42.420160  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:42.422694  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:42.422716  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:42.422725  712152 round_trippers.go:580]     Audit-Id: b10fc297-f2e8-4770-a4c3-b1ff82e2dafb
	I0918 19:19:42.422731  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:42.422738  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:42.422744  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:42.422750  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:42.422756  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:42 GMT
	I0918 19:19:42.422941  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235-m02","uid":"097117e2-35ff-448d-ae76-f008b19fbc7c","resourceVersion":"484","creationTimestamp":"2023-09-18T19:19:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I0918 19:19:42.423334  712152 node_ready.go:58] node "multinode-689235-m02" has status "Ready":"False"
	I0918 19:19:42.919953  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235-m02
	I0918 19:19:42.919976  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:42.919987  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:42.919995  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:42.922377  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:42.922396  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:42.922404  712152 round_trippers.go:580]     Audit-Id: 95084483-24af-4bc5-ba64-0b47e0718533
	I0918 19:19:42.922411  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:42.922417  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:42.922434  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:42.922441  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:42.922447  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:42 GMT
	I0918 19:19:42.922562  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235-m02","uid":"097117e2-35ff-448d-ae76-f008b19fbc7c","resourceVersion":"484","creationTimestamp":"2023-09-18T19:19:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I0918 19:19:43.420377  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235-m02
	I0918 19:19:43.420400  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:43.420410  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:43.420417  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:43.422891  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:43.422915  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:43.422923  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:43.422930  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:43.422936  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:43.422971  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:43.422985  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:43 GMT
	I0918 19:19:43.422992  712152 round_trippers.go:580]     Audit-Id: 93a1a1ef-6931-4af9-a78c-86c448ccdbd4
	I0918 19:19:43.423145  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235-m02","uid":"097117e2-35ff-448d-ae76-f008b19fbc7c","resourceVersion":"484","creationTimestamp":"2023-09-18T19:19:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I0918 19:19:43.920584  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235-m02
	I0918 19:19:43.920608  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:43.920618  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:43.920625  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:43.923136  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:43.923162  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:43.923171  712152 round_trippers.go:580]     Audit-Id: 370e7d43-5bdb-40a8-9f5b-b791923f83aa
	I0918 19:19:43.923178  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:43.923184  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:43.923190  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:43.923196  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:43.923203  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:43 GMT
	I0918 19:19:43.923285  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235-m02","uid":"097117e2-35ff-448d-ae76-f008b19fbc7c","resourceVersion":"484","creationTimestamp":"2023-09-18T19:19:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I0918 19:19:44.420299  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235-m02
	I0918 19:19:44.420325  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:44.420335  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:44.420342  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:44.422736  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:44.422757  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:44.422765  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:44.422772  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:44.422778  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:44.422785  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:44.422792  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:44 GMT
	I0918 19:19:44.422798  712152 round_trippers.go:580]     Audit-Id: f1cf7773-8275-4cdc-901f-575682624b68
	I0918 19:19:44.423006  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235-m02","uid":"097117e2-35ff-448d-ae76-f008b19fbc7c","resourceVersion":"484","creationTimestamp":"2023-09-18T19:19:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I0918 19:19:44.423368  712152 node_ready.go:58] node "multinode-689235-m02" has status "Ready":"False"
	I0918 19:19:44.919938  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235-m02
	I0918 19:19:44.919962  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:44.919971  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:44.919978  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:44.922647  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:44.922709  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:44.922720  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:44 GMT
	I0918 19:19:44.922727  712152 round_trippers.go:580]     Audit-Id: 6e94779f-91d1-4bc6-a8f1-65b766eb1ca1
	I0918 19:19:44.922733  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:44.922739  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:44.922745  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:44.922770  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:44.922894  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235-m02","uid":"097117e2-35ff-448d-ae76-f008b19fbc7c","resourceVersion":"484","creationTimestamp":"2023-09-18T19:19:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I0918 19:19:45.419955  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235-m02
	I0918 19:19:45.419981  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:45.419992  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:45.419999  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:45.422686  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:45.422711  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:45.422720  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:45.422727  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:45.422735  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:45.422741  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:45 GMT
	I0918 19:19:45.422748  712152 round_trippers.go:580]     Audit-Id: 7ae34574-81d6-46b4-98a7-ebe6e8bd11f6
	I0918 19:19:45.422755  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:45.423292  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235-m02","uid":"097117e2-35ff-448d-ae76-f008b19fbc7c","resourceVersion":"484","creationTimestamp":"2023-09-18T19:19:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I0918 19:19:45.920017  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235-m02
	I0918 19:19:45.920042  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:45.920051  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:45.920059  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:45.922485  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:45.922511  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:45.922519  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:45.922526  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:45.922532  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:45 GMT
	I0918 19:19:45.922539  712152 round_trippers.go:580]     Audit-Id: 5598a871-a088-4535-a870-a762069e9f68
	I0918 19:19:45.922545  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:45.922552  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:45.923059  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235-m02","uid":"097117e2-35ff-448d-ae76-f008b19fbc7c","resourceVersion":"484","creationTimestamp":"2023-09-18T19:19:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I0918 19:19:46.420562  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235-m02
	I0918 19:19:46.420594  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:46.420605  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:46.420612  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:46.423132  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:46.423162  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:46.423171  712152 round_trippers.go:580]     Audit-Id: 78249bdd-45ba-4c19-bc10-a4f9d2e5a1ae
	I0918 19:19:46.423178  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:46.423184  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:46.423190  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:46.423197  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:46.423210  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:46 GMT
	I0918 19:19:46.423320  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235-m02","uid":"097117e2-35ff-448d-ae76-f008b19fbc7c","resourceVersion":"484","creationTimestamp":"2023-09-18T19:19:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I0918 19:19:46.423717  712152 node_ready.go:58] node "multinode-689235-m02" has status "Ready":"False"
	I0918 19:19:46.920406  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235-m02
	I0918 19:19:46.920429  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:46.920438  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:46.920446  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:46.924483  712152 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0918 19:19:46.924569  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:46.924593  712152 round_trippers.go:580]     Audit-Id: 6c8c5d02-c299-4a3f-9cf1-2e01bc2b8a63
	I0918 19:19:46.924611  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:46.924628  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:46.924643  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:46.924666  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:46.924685  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:46 GMT
	I0918 19:19:46.924801  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235-m02","uid":"097117e2-35ff-448d-ae76-f008b19fbc7c","resourceVersion":"484","creationTimestamp":"2023-09-18T19:19:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I0918 19:19:47.419999  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235-m02
	I0918 19:19:47.420023  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:47.420033  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:47.420040  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:47.422654  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:47.422677  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:47.422687  712152 round_trippers.go:580]     Audit-Id: 6b612232-b53f-454c-81e8-6c818101a757
	I0918 19:19:47.422693  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:47.422700  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:47.422707  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:47.422713  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:47.422719  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:47 GMT
	I0918 19:19:47.422834  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235-m02","uid":"097117e2-35ff-448d-ae76-f008b19fbc7c","resourceVersion":"484","creationTimestamp":"2023-09-18T19:19:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I0918 19:19:47.920098  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235-m02
	I0918 19:19:47.920125  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:47.920135  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:47.920142  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:47.922629  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:47.922705  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:47.922721  712152 round_trippers.go:580]     Audit-Id: f9f3e50c-948d-4273-866c-5e2b7316d48a
	I0918 19:19:47.922728  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:47.922734  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:47.922740  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:47.922746  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:47.922753  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:47 GMT
	I0918 19:19:47.922867  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235-m02","uid":"097117e2-35ff-448d-ae76-f008b19fbc7c","resourceVersion":"484","creationTimestamp":"2023-09-18T19:19:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I0918 19:19:48.419937  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235-m02
	I0918 19:19:48.419959  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:48.419969  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:48.419978  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:48.423425  712152 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 19:19:48.423452  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:48.423460  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:48 GMT
	I0918 19:19:48.423467  712152 round_trippers.go:580]     Audit-Id: 6af950d2-a28a-42dd-90c5-edfeb796bcc3
	I0918 19:19:48.423473  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:48.423480  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:48.423486  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:48.423492  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:48.423603  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235-m02","uid":"097117e2-35ff-448d-ae76-f008b19fbc7c","resourceVersion":"484","creationTimestamp":"2023-09-18T19:19:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I0918 19:19:48.424007  712152 node_ready.go:58] node "multinode-689235-m02" has status "Ready":"False"
	I0918 19:19:48.920741  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235-m02
	I0918 19:19:48.920765  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:48.920775  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:48.920783  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:48.923106  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:48.923131  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:48.923140  712152 round_trippers.go:580]     Audit-Id: bb83cce2-3114-4410-91ee-0648f4839d58
	I0918 19:19:48.923147  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:48.923153  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:48.923158  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:48.923165  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:48.923176  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:48 GMT
	I0918 19:19:48.923325  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235-m02","uid":"097117e2-35ff-448d-ae76-f008b19fbc7c","resourceVersion":"484","creationTimestamp":"2023-09-18T19:19:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I0918 19:19:49.419982  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235-m02
	I0918 19:19:49.420006  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:49.420016  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:49.420023  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:49.422467  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:49.422487  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:49.422498  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:49.422504  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:49 GMT
	I0918 19:19:49.422511  712152 round_trippers.go:580]     Audit-Id: 1541f73c-33fb-43c9-94e0-20aeffad01ae
	I0918 19:19:49.422517  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:49.422523  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:49.422530  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:49.422760  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235-m02","uid":"097117e2-35ff-448d-ae76-f008b19fbc7c","resourceVersion":"484","creationTimestamp":"2023-09-18T19:19:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I0918 19:19:49.920266  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235-m02
	I0918 19:19:49.920289  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:49.920299  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:49.920306  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:49.922818  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:49.922846  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:49.922854  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:49.922867  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:49.922874  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:49 GMT
	I0918 19:19:49.922881  712152 round_trippers.go:580]     Audit-Id: 44ab0399-7234-4af9-ac19-054b65bb8147
	I0918 19:19:49.922920  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:49.922929  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:49.923135  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235-m02","uid":"097117e2-35ff-448d-ae76-f008b19fbc7c","resourceVersion":"484","creationTimestamp":"2023-09-18T19:19:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I0918 19:19:50.420928  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235-m02
	I0918 19:19:50.420953  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:50.420963  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:50.420970  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:50.423429  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:50.423453  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:50.423461  712152 round_trippers.go:580]     Audit-Id: 30eb706a-ef83-4079-a6c3-b9ae6bd54514
	I0918 19:19:50.423468  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:50.423474  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:50.423480  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:50.423487  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:50.423493  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:50 GMT
	I0918 19:19:50.423666  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235-m02","uid":"097117e2-35ff-448d-ae76-f008b19fbc7c","resourceVersion":"484","creationTimestamp":"2023-09-18T19:19:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I0918 19:19:50.424060  712152 node_ready.go:58] node "multinode-689235-m02" has status "Ready":"False"
	I0918 19:19:50.919982  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235-m02
	I0918 19:19:50.920005  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:50.920015  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:50.920024  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:50.922543  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:50.922606  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:50.922627  712152 round_trippers.go:580]     Audit-Id: c4c28b92-48da-49e2-bff4-9a8415a9626b
	I0918 19:19:50.922651  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:50.922685  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:50.922710  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:50.922721  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:50.922728  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:50 GMT
	I0918 19:19:50.922874  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235-m02","uid":"097117e2-35ff-448d-ae76-f008b19fbc7c","resourceVersion":"484","creationTimestamp":"2023-09-18T19:19:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I0918 19:19:51.420010  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235-m02
	I0918 19:19:51.420033  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:51.420043  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:51.420050  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:51.422724  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:51.422806  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:51.422872  712152 round_trippers.go:580]     Audit-Id: 379d5532-7736-40e2-a2e0-85258a7f8421
	I0918 19:19:51.422902  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:51.422931  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:51.422954  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:51.422976  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:51.422994  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:51 GMT
	I0918 19:19:51.423126  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235-m02","uid":"097117e2-35ff-448d-ae76-f008b19fbc7c","resourceVersion":"484","creationTimestamp":"2023-09-18T19:19:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I0918 19:19:51.920493  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235-m02
	I0918 19:19:51.920515  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:51.920524  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:51.920531  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:51.923053  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:51.923090  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:51.923099  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:51.923130  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:51.923136  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:51.923143  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:51.923150  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:51 GMT
	I0918 19:19:51.923159  712152 round_trippers.go:580]     Audit-Id: 8dd4a5ef-2290-4aee-852b-73e1226c8710
	I0918 19:19:51.923246  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235-m02","uid":"097117e2-35ff-448d-ae76-f008b19fbc7c","resourceVersion":"484","creationTimestamp":"2023-09-18T19:19:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I0918 19:19:52.420883  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235-m02
	I0918 19:19:52.420907  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:52.420917  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:52.420924  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:52.423449  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:52.423476  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:52.423485  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:52.423491  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:52.423498  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:52 GMT
	I0918 19:19:52.423504  712152 round_trippers.go:580]     Audit-Id: d2e9054e-bd6a-4285-82df-4329ccceed43
	I0918 19:19:52.423510  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:52.423517  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:52.423636  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235-m02","uid":"097117e2-35ff-448d-ae76-f008b19fbc7c","resourceVersion":"484","creationTimestamp":"2023-09-18T19:19:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{ [truncated 5561 chars]
	I0918 19:19:52.920744  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235-m02
	I0918 19:19:52.920786  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:52.920796  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:52.920804  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:52.923437  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:52.923463  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:52.923471  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:52.923478  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:52.923484  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:52 GMT
	I0918 19:19:52.923491  712152 round_trippers.go:580]     Audit-Id: eadc4975-7259-430d-a7b5-cda2d276de63
	I0918 19:19:52.923498  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:52.923504  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:52.923649  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235-m02","uid":"097117e2-35ff-448d-ae76-f008b19fbc7c","resourceVersion":"507","creationTimestamp":"2023-09-18T19:19:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5378 chars]
	I0918 19:19:52.924043  712152 node_ready.go:49] node "multinode-689235-m02" has status "Ready":"True"
	I0918 19:19:52.924061  712152 node_ready.go:38] duration metric: took 33.01172015s waiting for node "multinode-689235-m02" to be "Ready" ...
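The loop above is minikube's node_ready helper polling GET /api/v1/nodes/multinode-689235-m02 roughly every 500ms until the node's "Ready" condition flips to True, which in this run took about 33s. As a minimal client-go sketch of that pattern (an illustration only, not minikube's actual implementation; the node name is taken from this run, and the 500ms interval and 6-minute budget are assumptions):

	// Sketch: poll a Node until its Ready condition is True, the pattern
	// node_ready.go logs above. Interval and timeout are illustrative.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load ~/.kube/config; assumes the minikube context is active.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, "multinode-689235-m02", metav1.GetOptions{})
				if err != nil {
					return false, err // a failed GET aborts the wait
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						// The log lines above print exactly this condition's status.
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
		fmt.Println("wait finished, err =", err)
	}

From the command line, `kubectl wait --for=condition=Ready node/multinode-689235-m02 --timeout=6m` expresses roughly the same condition.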
	I0918 19:19:52.924072  712152 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 19:19:52.924147  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0918 19:19:52.924155  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:52.924162  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:52.924169  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:52.927693  712152 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0918 19:19:52.927719  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:52.927727  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:52.927733  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:52.927739  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:52.927746  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:52 GMT
	I0918 19:19:52.927752  712152 round_trippers.go:580]     Audit-Id: 551a27ae-8f81-4cb0-bd02-5f2725f0f6e7
	I0918 19:19:52.927762  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:52.929913  712152 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"507"},"items":[{"metadata":{"name":"coredns-5dd5756b68-52fpx","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d643472b-4be9-4a29-bf6a-e83171d46b1c","resourceVersion":"419","creationTimestamp":"2023-09-18T19:18:30Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"50b21e75-4c2c-4915-bb6e-5bee1d42dabc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:18:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"50b21e75-4c2c-4915-bb6e-5bee1d42dabc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 68972 chars]
	I0918 19:19:52.932945  712152 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-52fpx" in "kube-system" namespace to be "Ready" ...
	I0918 19:19:52.933037  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-52fpx
	I0918 19:19:52.933047  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:52.933056  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:52.933064  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:52.935532  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:52.935555  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:52.935564  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:52.935571  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:52 GMT
	I0918 19:19:52.935577  712152 round_trippers.go:580]     Audit-Id: ca42990c-0f74-4c57-a38e-fb577064347e
	I0918 19:19:52.935583  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:52.935589  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:52.935595  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:52.935720  712152 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-52fpx","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"d643472b-4be9-4a29-bf6a-e83171d46b1c","resourceVersion":"419","creationTimestamp":"2023-09-18T19:18:30Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"50b21e75-4c2c-4915-bb6e-5bee1d42dabc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:18:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"50b21e75-4c2c-4915-bb6e-5bee1d42dabc\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I0918 19:19:52.936260  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:19:52.936284  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:52.936293  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:52.936300  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:52.938478  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:52.938500  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:52.938509  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:52 GMT
	I0918 19:19:52.938515  712152 round_trippers.go:580]     Audit-Id: c8eadd06-d977-4a8e-bf1c-cf5a18866519
	I0918 19:19:52.938522  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:52.938532  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:52.938544  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:52.938550  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:52.938760  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","resourceVersion":"400","creationTimestamp":"2023-09-18T19:18:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36","minikube.k8s.io/name":"multinode-689235","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_18T19_18_17_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:18:13Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0918 19:19:52.939154  712152 pod_ready.go:92] pod "coredns-5dd5756b68-52fpx" in "kube-system" namespace has status "Ready":"True"
	I0918 19:19:52.939169  712152 pod_ready.go:81] duration metric: took 6.198063ms waiting for pod "coredns-5dd5756b68-52fpx" in "kube-system" namespace to be "Ready" ...
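Each pod_ready check above and below is the same cycle: GET the pod, scan status.conditions for the Ready condition, then GET the node it is scheduled on (both requests are visible in the log); the cycle repeats for etcd, kube-apiserver, and kube-controller-manager. A minimal sketch of the pod-side test, assuming the clientset and imports from the previous sketch (podReady is our name for a hypothetical helper, not minikube's):

	// Sketch: the per-pod "Ready" test that pod_ready.go logs above.
	// cs is the *kubernetes.Clientset from the earlier sketch; it needs the
	// same corev1/metav1 imports.
	func podReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

For example, podReady(ctx, cs, "kube-system", "coredns-5dd5756b68-52fpx") corresponds to the 19:19:52.933 request above; in this run each such check returned true on the first attempt, in single-digit milliseconds.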
	I0918 19:19:52.939180  712152 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-689235" in "kube-system" namespace to be "Ready" ...
	I0918 19:19:52.939238  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-689235
	I0918 19:19:52.939248  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:52.939256  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:52.939262  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:52.941404  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:52.941420  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:52.941428  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:52 GMT
	I0918 19:19:52.941434  712152 round_trippers.go:580]     Audit-Id: 749d0161-05ce-41d9-aa4e-0f614f721d35
	I0918 19:19:52.941440  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:52.941446  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:52.941452  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:52.941458  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:52.941553  712152 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-689235","namespace":"kube-system","uid":"1bc456e1-2455-4466-8f8f-6e27f3e804f2","resourceVersion":"387","creationTimestamp":"2023-09-18T19:18:17Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"58704de8334e799fab0624e8a943846a","kubernetes.io/config.mirror":"58704de8334e799fab0624e8a943846a","kubernetes.io/config.seen":"2023-09-18T19:18:16.900580807Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:18:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I0918 19:19:52.941977  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:19:52.941984  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:52.941991  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:52.941998  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:52.944083  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:52.944136  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:52.944156  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:52.944175  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:52.944204  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:52.944226  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:52 GMT
	I0918 19:19:52.944241  712152 round_trippers.go:580]     Audit-Id: 5d219466-b0b2-46a6-98a6-c003c6e3a7b5
	I0918 19:19:52.944247  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:52.944387  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","resourceVersion":"400","creationTimestamp":"2023-09-18T19:18:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36","minikube.k8s.io/name":"multinode-689235","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_18T19_18_17_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:18:13Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0918 19:19:52.944784  712152 pod_ready.go:92] pod "etcd-multinode-689235" in "kube-system" namespace has status "Ready":"True"
	I0918 19:19:52.944800  712152 pod_ready.go:81] duration metric: took 5.609506ms waiting for pod "etcd-multinode-689235" in "kube-system" namespace to be "Ready" ...
	I0918 19:19:52.944817  712152 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-689235" in "kube-system" namespace to be "Ready" ...
	I0918 19:19:52.944878  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-689235
	I0918 19:19:52.944887  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:52.944895  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:52.944901  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:52.947203  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:52.947259  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:52.947281  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:52.947314  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:52.947338  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:52.947359  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:52 GMT
	I0918 19:19:52.947390  712152 round_trippers.go:580]     Audit-Id: 32a49ee7-759c-4400-a18d-1e37cedb68e2
	I0918 19:19:52.947410  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:52.947629  712152 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-689235","namespace":"kube-system","uid":"8fd4d983-6d28-45c4-8701-40cca4fbe65a","resourceVersion":"390","creationTimestamp":"2023-09-18T19:18:16Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"8cbdf887d99a1fc14e5f027ff73e02fd","kubernetes.io/config.mirror":"8cbdf887d99a1fc14e5f027ff73e02fd","kubernetes.io/config.seen":"2023-09-18T19:18:08.282472686Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:18:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I0918 19:19:52.948180  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:19:52.948193  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:52.948202  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:52.948210  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:52.950330  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:52.950352  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:52.950360  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:52.950368  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:52.950384  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:52.950396  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:52.950402  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:52 GMT
	I0918 19:19:52.950409  712152 round_trippers.go:580]     Audit-Id: c4c1e8a2-8fd5-4a1b-9478-a366a5114165
	I0918 19:19:52.950686  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","resourceVersion":"400","creationTimestamp":"2023-09-18T19:18:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36","minikube.k8s.io/name":"multinode-689235","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_18T19_18_17_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:18:13Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0918 19:19:52.951103  712152 pod_ready.go:92] pod "kube-apiserver-multinode-689235" in "kube-system" namespace has status "Ready":"True"
	I0918 19:19:52.951120  712152 pod_ready.go:81] duration metric: took 6.287654ms waiting for pod "kube-apiserver-multinode-689235" in "kube-system" namespace to be "Ready" ...
	I0918 19:19:52.951133  712152 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-689235" in "kube-system" namespace to be "Ready" ...
	I0918 19:19:52.951195  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-689235
	I0918 19:19:52.951203  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:52.951211  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:52.951218  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:52.953500  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:52.953565  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:52.953586  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:52.953605  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:52.953635  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:52 GMT
	I0918 19:19:52.953649  712152 round_trippers.go:580]     Audit-Id: 44f2ee50-9f13-4af9-bdd3-39d17d960470
	I0918 19:19:52.953655  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:52.953662  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:52.953859  712152 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-689235","namespace":"kube-system","uid":"249188f1-89c0-4de2-b1fa-5d4ec581f882","resourceVersion":"389","creationTimestamp":"2023-09-18T19:18:17Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d981450df7224320f56b3d04a848ea78","kubernetes.io/config.mirror":"d981450df7224320f56b3d04a848ea78","kubernetes.io/config.seen":"2023-09-18T19:18:16.900573767Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:18:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I0918 19:19:52.954385  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:19:52.954400  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:52.954408  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:52.954416  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:52.956690  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:52.956712  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:52.956728  712152 round_trippers.go:580]     Audit-Id: 79bc01a3-7b8f-4607-bd44-0b026128983d
	I0918 19:19:52.956735  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:52.956742  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:52.956751  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:52.956762  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:52.956769  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:52 GMT
	I0918 19:19:52.956879  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","resourceVersion":"400","creationTimestamp":"2023-09-18T19:18:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36","minikube.k8s.io/name":"multinode-689235","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_18T19_18_17_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-18T19:18:13Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0918 19:19:52.957276  712152 pod_ready.go:92] pod "kube-controller-manager-multinode-689235" in "kube-system" namespace has status "Ready":"True"
	I0918 19:19:52.957294  712152 pod_ready.go:81] duration metric: took 6.150095ms waiting for pod "kube-controller-manager-multinode-689235" in "kube-system" namespace to be "Ready" ...
	I0918 19:19:52.957305  712152 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fgvhl" in "kube-system" namespace to be "Ready" ...
	I0918 19:19:53.121665  712152 request.go:629] Waited for 164.292478ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fgvhl
	I0918 19:19:53.121762  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fgvhl
	I0918 19:19:53.121777  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:53.121787  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:53.121798  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:53.124482  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:53.124504  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:53.124513  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:53.124521  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:53 GMT
	I0918 19:19:53.124527  712152 round_trippers.go:580]     Audit-Id: 45f11a08-c8b9-4a89-8ae1-b9b6c9433a18
	I0918 19:19:53.124533  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:53.124539  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:53.124546  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:53.124664  712152 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-fgvhl","generateName":"kube-proxy-","namespace":"kube-system","uid":"aedacfda-e3d4-48ea-8612-a3a48c64a15d","resourceVersion":"381","creationTimestamp":"2023-09-18T19:18:30Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8848d295-fe10-4902-8477-fffd231f32ff","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:18:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8848d295-fe10-4902-8477-fffd231f32ff\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I0918 19:19:53.321563  712152 request.go:629] Waited for 196.368159ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:19:53.321639  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:19:53.321654  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:53.321694  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:53.321705  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:53.324671  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:53.324699  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:53.324708  712152 round_trippers.go:580]     Audit-Id: e27e3709-3032-42bc-b305-1ae445d8ce82
	I0918 19:19:53.324721  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:53.324727  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:53.324733  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:53.324744  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:53.324752  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:53 GMT
	I0918 19:19:53.325084  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","resourceVersion":"400","creationTimestamp":"2023-09-18T19:18:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36","minikube.k8s.io/name":"multinode-689235","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_18T19_18_17_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-18T19:18:13Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0918 19:19:53.325566  712152 pod_ready.go:92] pod "kube-proxy-fgvhl" in "kube-system" namespace has status "Ready":"True"
	I0918 19:19:53.325582  712152 pod_ready.go:81] duration metric: took 368.268837ms waiting for pod "kube-proxy-fgvhl" in "kube-system" namespace to be "Ready" ...
	I0918 19:19:53.325594  712152 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rz57v" in "kube-system" namespace to be "Ready" ...
	I0918 19:19:53.520911  712152 request.go:629] Waited for 195.245315ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rz57v
	I0918 19:19:53.520993  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rz57v
	I0918 19:19:53.520999  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:53.521009  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:53.521020  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:53.523904  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:53.523940  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:53.523949  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:53.523956  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:53 GMT
	I0918 19:19:53.523963  712152 round_trippers.go:580]     Audit-Id: 9dfd3896-9022-4e04-8191-24dc7f25847e
	I0918 19:19:53.523969  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:53.523978  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:53.523987  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:53.524102  712152 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-rz57v","generateName":"kube-proxy-","namespace":"kube-system","uid":"6968bee2-de9e-43b6-9b8b-7e416c2f0342","resourceVersion":"473","creationTimestamp":"2023-09-18T19:19:19Z","labels":{"controller-revision-hash":"5cbdb8dcbd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8848d295-fe10-4902-8477-fffd231f32ff","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8848d295-fe10-4902-8477-fffd231f32ff\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0918 19:19:53.721554  712152 request.go:629] Waited for 196.967553ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-689235-m02
	I0918 19:19:53.721639  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235-m02
	I0918 19:19:53.721650  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:53.721660  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:53.721667  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:53.724288  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:53.724318  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:53.724326  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:53.724333  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:53 GMT
	I0918 19:19:53.724339  712152 round_trippers.go:580]     Audit-Id: 3e6eccf7-a118-4b78-9444-5bdbcd1489b5
	I0918 19:19:53.724346  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:53.724368  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:53.724379  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:53.724497  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235-m02","uid":"097117e2-35ff-448d-ae76-f008b19fbc7c","resourceVersion":"507","creationTimestamp":"2023-09-18T19:19:19Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:19:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5378 chars]
	I0918 19:19:53.724904  712152 pod_ready.go:92] pod "kube-proxy-rz57v" in "kube-system" namespace has status "Ready":"True"
	I0918 19:19:53.724923  712152 pod_ready.go:81] duration metric: took 399.314696ms waiting for pod "kube-proxy-rz57v" in "kube-system" namespace to be "Ready" ...
	I0918 19:19:53.724935  712152 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-689235" in "kube-system" namespace to be "Ready" ...
	I0918 19:19:53.921319  712152 request.go:629] Waited for 196.317689ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-689235
	I0918 19:19:53.921419  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-689235
	I0918 19:19:53.921429  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:53.921439  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:53.921474  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:53.924074  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:53.924100  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:53.924109  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:53.924115  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:53.924122  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:53.924128  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:53.924134  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:53 GMT
	I0918 19:19:53.924141  712152 round_trippers.go:580]     Audit-Id: 0ae8773a-fccd-40a5-9999-e961fccc92ac
	I0918 19:19:53.924253  712152 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-689235","namespace":"kube-system","uid":"59a3807d-aea7-4edd-a329-f208496dd249","resourceVersion":"388","creationTimestamp":"2023-09-18T19:18:17Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4cd8ef9e9e1c62bf0f4649ea7d8fab42","kubernetes.io/config.mirror":"4cd8ef9e9e1c62bf0f4649ea7d8fab42","kubernetes.io/config.seen":"2023-09-18T19:18:16.900579010Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-18T19:18:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I0918 19:19:54.120931  712152 request.go:629] Waited for 196.265939ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:19:54.121041  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-689235
	I0918 19:19:54.121084  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:54.121104  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:54.121116  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:54.123940  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:54.123983  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:54.123996  712152 round_trippers.go:580]     Audit-Id: 0b9a9c97-b32f-40be-908b-90658f5be098
	I0918 19:19:54.124006  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:54.124014  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:54.124020  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:54.124033  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:54.124043  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:54 GMT
	I0918 19:19:54.124155  712152 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","resourceVersion":"400","creationTimestamp":"2023-09-18T19:18:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36","minikube.k8s.io/name":"multinode-689235","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_18T19_18_17_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-18T19:18:13Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I0918 19:19:54.124587  712152 pod_ready.go:92] pod "kube-scheduler-multinode-689235" in "kube-system" namespace has status "Ready":"True"
	I0918 19:19:54.124603  712152 pod_ready.go:81] duration metric: took 399.659182ms waiting for pod "kube-scheduler-multinode-689235" in "kube-system" namespace to be "Ready" ...
	I0918 19:19:54.124615  712152 pod_ready.go:38] duration metric: took 1.200528789s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0918 19:19:54.124634  712152 system_svc.go:44] waiting for kubelet service to be running ....
	I0918 19:19:54.124701  712152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 19:19:54.139320  712152 system_svc.go:56] duration metric: took 14.666664ms WaitForService to wait for kubelet.
	I0918 19:19:54.139348  712152 kubeadm.go:581] duration metric: took 34.252920079s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0918 19:19:54.139371  712152 node_conditions.go:102] verifying NodePressure condition ...
	I0918 19:19:54.321581  712152 request.go:629] Waited for 182.123052ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0918 19:19:54.321652  712152 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0918 19:19:54.321658  712152 round_trippers.go:469] Request Headers:
	I0918 19:19:54.321668  712152 round_trippers.go:473]     Accept: application/json, */*
	I0918 19:19:54.321675  712152 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I0918 19:19:54.324313  712152 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0918 19:19:54.324452  712152 round_trippers.go:577] Response Headers:
	I0918 19:19:54.324480  712152 round_trippers.go:580]     Content-Type: application/json
	I0918 19:19:54.324503  712152 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32aab265-2485-446d-9b8c-e4360c2d9560
	I0918 19:19:54.324561  712152 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: df5d531b-7b98-4743-a538-f0588c4bf2ff
	I0918 19:19:54.324585  712152 round_trippers.go:580]     Date: Mon, 18 Sep 2023 19:19:54 GMT
	I0918 19:19:54.324605  712152 round_trippers.go:580]     Audit-Id: 86fa8b38-6893-4e5e-9a82-8fd561c1976d
	I0918 19:19:54.324621  712152 round_trippers.go:580]     Cache-Control: no-cache, private
	I0918 19:19:54.324847  712152 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"508"},"items":[{"metadata":{"name":"multinode-689235","uid":"27b371be-d95c-4abc-98d3-46787b412be3","resourceVersion":"400","creationTimestamp":"2023-09-18T19:18:13Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-689235","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36","minikube.k8s.io/name":"multinode-689235","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_18T19_18_17_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 12452 chars]
	I0918 19:19:54.325505  712152 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0918 19:19:54.325525  712152 node_conditions.go:123] node cpu capacity is 2
	I0918 19:19:54.325536  712152 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0918 19:19:54.325541  712152 node_conditions.go:123] node cpu capacity is 2
	I0918 19:19:54.325545  712152 node_conditions.go:105] duration metric: took 186.169507ms to run NodePressure ...
	I0918 19:19:54.325556  712152 start.go:228] waiting for startup goroutines ...
	I0918 19:19:54.325584  712152 start.go:242] writing updated cluster config ...
	I0918 19:19:54.325892  712152 ssh_runner.go:195] Run: rm -f paused
	I0918 19:19:54.396787  712152 start.go:600] kubectl: 1.28.2, cluster: 1.28.2 (minor skew: 0)
	I0918 19:19:54.399139  712152 out.go:177] * Done! kubectl is now configured to use "multinode-689235" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* Sep 18 19:19:02 multinode-689235 crio[905]: time="2023-09-18 19:19:02.680409841Z" level=info msg="Starting container: d7adfa07076ccdc74165935d1b0e44d6c14fa0710f1a181cf628ff5ac75cb66e" id=d0949247-ccbd-4dd9-b06d-bf95e75958e4 name=/runtime.v1.RuntimeService/StartContainer
	Sep 18 19:19:02 multinode-689235 crio[905]: time="2023-09-18 19:19:02.694305653Z" level=info msg="Started container" PID=1947 containerID=d7adfa07076ccdc74165935d1b0e44d6c14fa0710f1a181cf628ff5ac75cb66e description=kube-system/storage-provisioner/storage-provisioner id=d0949247-ccbd-4dd9-b06d-bf95e75958e4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ac43fedee4cf59ba6baa4944e0e3c9bf4e79d9887edb9da4d1bc65d6be8fabea
	Sep 18 19:19:02 multinode-689235 crio[905]: time="2023-09-18 19:19:02.714042659Z" level=info msg="Created container bc2927fcf948d3272a012365d0696d83d88ff953cae6b6d2d2216759951820c5: kube-system/coredns-5dd5756b68-52fpx/coredns" id=a0a68be4-8daf-4bfe-bdf9-d0e0d2754f71 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 18 19:19:02 multinode-689235 crio[905]: time="2023-09-18 19:19:02.714618473Z" level=info msg="Starting container: bc2927fcf948d3272a012365d0696d83d88ff953cae6b6d2d2216759951820c5" id=20cac845-26b0-4e5b-8c78-6f308f1b6732 name=/runtime.v1.RuntimeService/StartContainer
	Sep 18 19:19:02 multinode-689235 crio[905]: time="2023-09-18 19:19:02.728578900Z" level=info msg="Started container" PID=1968 containerID=bc2927fcf948d3272a012365d0696d83d88ff953cae6b6d2d2216759951820c5 description=kube-system/coredns-5dd5756b68-52fpx/coredns id=20cac845-26b0-4e5b-8c78-6f308f1b6732 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2c7b3cdd7049ce158e890e86f1028765e436203adcf59afb09379bcd3796bf4b
	Sep 18 19:19:55 multinode-689235 crio[905]: time="2023-09-18 19:19:55.687907641Z" level=info msg="Running pod sandbox: default/busybox-5bc68d56bd-rmmxk/POD" id=c74f1d5b-ba3d-4048-a8af-3fe19c0bae04 name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 18 19:19:55 multinode-689235 crio[905]: time="2023-09-18 19:19:55.687980240Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 18 19:19:55 multinode-689235 crio[905]: time="2023-09-18 19:19:55.704290020Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-rmmxk Namespace:default ID:c20fd4b5177888e4cd8e17fb6ef1cab01c73e82a437927188ffe5eadef2a75c1 UID:27f127bc-82d4-4213-8ee8-498dc898217f NetNS:/var/run/netns/1bb5de2d-5ade-4e33-9f98-1a7dd89b706b Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 18 19:19:55 multinode-689235 crio[905]: time="2023-09-18 19:19:55.704469015Z" level=info msg="Adding pod default_busybox-5bc68d56bd-rmmxk to CNI network \"kindnet\" (type=ptp)"
	Sep 18 19:19:55 multinode-689235 crio[905]: time="2023-09-18 19:19:55.716602345Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-rmmxk Namespace:default ID:c20fd4b5177888e4cd8e17fb6ef1cab01c73e82a437927188ffe5eadef2a75c1 UID:27f127bc-82d4-4213-8ee8-498dc898217f NetNS:/var/run/netns/1bb5de2d-5ade-4e33-9f98-1a7dd89b706b Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 18 19:19:55 multinode-689235 crio[905]: time="2023-09-18 19:19:55.716751063Z" level=info msg="Checking pod default_busybox-5bc68d56bd-rmmxk for CNI network kindnet (type=ptp)"
	Sep 18 19:19:55 multinode-689235 crio[905]: time="2023-09-18 19:19:55.743085381Z" level=info msg="Ran pod sandbox c20fd4b5177888e4cd8e17fb6ef1cab01c73e82a437927188ffe5eadef2a75c1 with infra container: default/busybox-5bc68d56bd-rmmxk/POD" id=c74f1d5b-ba3d-4048-a8af-3fe19c0bae04 name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 18 19:19:55 multinode-689235 crio[905]: time="2023-09-18 19:19:55.744121799Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=816a23f0-e438-41c3-8f06-f5f2615e482e name=/runtime.v1.ImageService/ImageStatus
	Sep 18 19:19:55 multinode-689235 crio[905]: time="2023-09-18 19:19:55.744339589Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=816a23f0-e438-41c3-8f06-f5f2615e482e name=/runtime.v1.ImageService/ImageStatus
	Sep 18 19:19:55 multinode-689235 crio[905]: time="2023-09-18 19:19:55.745293102Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=eb0916b5-5f1c-4963-b348-69c3d7874a86 name=/runtime.v1.ImageService/PullImage
	Sep 18 19:19:55 multinode-689235 crio[905]: time="2023-09-18 19:19:55.747130090Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Sep 18 19:19:56 multinode-689235 crio[905]: time="2023-09-18 19:19:56.381898715Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Sep 18 19:19:57 multinode-689235 crio[905]: time="2023-09-18 19:19:57.596051418Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3" id=eb0916b5-5f1c-4963-b348-69c3d7874a86 name=/runtime.v1.ImageService/PullImage
	Sep 18 19:19:57 multinode-689235 crio[905]: time="2023-09-18 19:19:57.597247074Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=f01d5256-b3d4-428c-af99-2bf3b080b0cf name=/runtime.v1.ImageService/ImageStatus
	Sep 18 19:19:57 multinode-689235 crio[905]: time="2023-09-18 19:19:57.597943946Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1496796,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=f01d5256-b3d4-428c-af99-2bf3b080b0cf name=/runtime.v1.ImageService/ImageStatus
	Sep 18 19:19:57 multinode-689235 crio[905]: time="2023-09-18 19:19:57.600044730Z" level=info msg="Creating container: default/busybox-5bc68d56bd-rmmxk/busybox" id=4b16ce8e-f1f4-4ed9-9be3-3d5c77efb4e1 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 18 19:19:57 multinode-689235 crio[905]: time="2023-09-18 19:19:57.600137940Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 18 19:19:57 multinode-689235 crio[905]: time="2023-09-18 19:19:57.694615225Z" level=info msg="Created container d4eff789389eba6f3c57ae34875bdfb0a258b867295bd3d2f88c934adf496a62: default/busybox-5bc68d56bd-rmmxk/busybox" id=4b16ce8e-f1f4-4ed9-9be3-3d5c77efb4e1 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 18 19:19:57 multinode-689235 crio[905]: time="2023-09-18 19:19:57.695421480Z" level=info msg="Starting container: d4eff789389eba6f3c57ae34875bdfb0a258b867295bd3d2f88c934adf496a62" id=645a9ae5-5965-474f-b218-53a117e0c744 name=/runtime.v1.RuntimeService/StartContainer
	Sep 18 19:19:57 multinode-689235 crio[905]: time="2023-09-18 19:19:57.706468939Z" level=info msg="Started container" PID=2107 containerID=d4eff789389eba6f3c57ae34875bdfb0a258b867295bd3d2f88c934adf496a62 description=default/busybox-5bc68d56bd-rmmxk/busybox id=645a9ae5-5965-474f-b218-53a117e0c744 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c20fd4b5177888e4cd8e17fb6ef1cab01c73e82a437927188ffe5eadef2a75c1
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	d4eff789389eb       gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3   6 seconds ago        Running             busybox                   0                   c20fd4b517788       busybox-5bc68d56bd-rmmxk
	bc2927fcf948d       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      About a minute ago   Running             coredns                   0                   2c7b3cdd7049c       coredns-5dd5756b68-52fpx
	d7adfa07076cc       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      About a minute ago   Running             storage-provisioner       0                   ac43fedee4cf5       storage-provisioner
	a561db1125379       7da62c127fc0f2c3473babe4dd0fe1da874278c4e524a490b1781e3e0e6dddfa                                      About a minute ago   Running             kube-proxy                0                   13b7c5da752c5       kube-proxy-fgvhl
	e7d88a302e928       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26                                      About a minute ago   Running             kindnet-cni               0                   e9417a0c3f2f2       kindnet-5jgz2
	8d8de90fd380f       64fc40cee3716a4596d219b360ce536adb7998eaeae3f5dbb774d6503f5039d7                                      About a minute ago   Running             kube-scheduler            0                   fbdc95d79796e       kube-scheduler-multinode-689235
	368a2eaa1ce89       89d57b83c17862d0ca2dd214e9e5ad425f8d67ecba32d10b846f8d22d3b5597c                                      About a minute ago   Running             kube-controller-manager   0                   2c3558c75a6e4       kube-controller-manager-multinode-689235
	4518bed1b6e78       30bb499447fe1bc655853e2e8ac386cdeb28c80263536259cb54f290f9a58d6c                                      About a minute ago   Running             kube-apiserver            0                   a5ddd8785ea33       kube-apiserver-multinode-689235
	0d9141b099ada       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      About a minute ago   Running             etcd                      0                   26cd4152f05c5       etcd-multinode-689235
	
	* 
	* ==> coredns [bc2927fcf948d3272a012365d0696d83d88ff953cae6b6d2d2216759951820c5] <==
	* [INFO] 10.244.1.2:35594 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000134909s
	[INFO] 10.244.0.3:57642 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000146134s
	[INFO] 10.244.0.3:40796 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.009321701s
	[INFO] 10.244.0.3:49190 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000118516s
	[INFO] 10.244.0.3:44276 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000093022s
	[INFO] 10.244.0.3:56021 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001365552s
	[INFO] 10.244.0.3:33373 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000065469s
	[INFO] 10.244.0.3:55585 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00005106s
	[INFO] 10.244.0.3:59701 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000053104s
	[INFO] 10.244.1.2:36434 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014565s
	[INFO] 10.244.1.2:49214 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000077728s
	[INFO] 10.244.1.2:43359 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000103197s
	[INFO] 10.244.1.2:54594 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000086573s
	[INFO] 10.244.0.3:58939 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000110392s
	[INFO] 10.244.0.3:45900 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000061161s
	[INFO] 10.244.0.3:54035 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000058232s
	[INFO] 10.244.0.3:55986 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000051898s
	[INFO] 10.244.1.2:45185 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000203463s
	[INFO] 10.244.1.2:56049 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000183123s
	[INFO] 10.244.1.2:53324 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000121125s
	[INFO] 10.244.1.2:33316 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000164054s
	[INFO] 10.244.0.3:58734 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000082027s
	[INFO] 10.244.0.3:47671 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000046408s
	[INFO] 10.244.0.3:38319 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000051938s
	[INFO] 10.244.0.3:50366 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000043216s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-689235
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=multinode-689235
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ae2adeb20cc3a4b5f1decc7c8f53736ec04c4a36
	                    minikube.k8s.io/name=multinode-689235
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_18T19_18_17_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Sep 2023 19:18:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-689235
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Sep 2023 19:19:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Sep 2023 19:19:02 +0000   Mon, 18 Sep 2023 19:18:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Sep 2023 19:19:02 +0000   Mon, 18 Sep 2023 19:18:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Sep 2023 19:19:02 +0000   Mon, 18 Sep 2023 19:18:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Sep 2023 19:19:02 +0000   Mon, 18 Sep 2023 19:19:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-689235
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 a3664c37fa0a492e87119a42ccf3a41d
	  System UUID:                d7b74489-af53-4f63-8db1-a55d4e629dbc
	  Boot ID:                    43cd75a3-7352-4de5-a11c-da52fa8117dc
	  Kernel Version:             5.15.0-1044-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-rmmxk                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-5dd5756b68-52fpx                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     94s
	  kube-system                 etcd-multinode-689235                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         107s
	  kube-system                 kindnet-5jgz2                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      94s
	  kube-system                 kube-apiserver-multinode-689235             250m (12%)    0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-controller-manager-multinode-689235    200m (10%)    0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-proxy-fgvhl                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 kube-scheduler-multinode-689235             100m (5%)     0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 92s   kube-proxy       
	  Normal  Starting                 108s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  107s  kubelet          Node multinode-689235 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    107s  kubelet          Node multinode-689235 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     107s  kubelet          Node multinode-689235 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           95s   node-controller  Node multinode-689235 event: Registered Node multinode-689235 in Controller
	  Normal  NodeReady                62s   kubelet          Node multinode-689235 status is now: NodeReady
	
	
	Name:               multinode-689235-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=multinode-689235-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Sep 2023 19:19:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-689235-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Sep 2023 19:19:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Sep 2023 19:19:52 +0000   Mon, 18 Sep 2023 19:19:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Sep 2023 19:19:52 +0000   Mon, 18 Sep 2023 19:19:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Sep 2023 19:19:52 +0000   Mon, 18 Sep 2023 19:19:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Sep 2023 19:19:52 +0000   Mon, 18 Sep 2023 19:19:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-689235-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022500Ki
	  pods:               110
	System Info:
	  Machine ID:                 70faa626d3a04f43972f15b5a5bb833a
	  System UUID:                f7d9085f-8767-421c-bd77-4a505d8200e6
	  Boot ID:                    43cd75a3-7352-4de5-a11c-da52fa8117dc
	  Kernel Version:             5.15.0-1044-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.2
	  Kube-Proxy Version:         v1.28.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-2bktr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kindnet-lhfgv               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      45s
	  kube-system                 kube-proxy-rz57v            0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 41s                kube-proxy       
	  Normal  RegisteredNode           45s                node-controller  Node multinode-689235-m02 event: Registered Node multinode-689235-m02 in Controller
	  Normal  NodeHasSufficientMemory  45s (x5 over 47s)  kubelet          Node multinode-689235-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    45s (x5 over 47s)  kubelet          Node multinode-689235-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     45s (x5 over 47s)  kubelet          Node multinode-689235-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                12s                kubelet          Node multinode-689235-m02 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.001145] FS-Cache: O-key=[8] '7670ed0000000000'
	[  +0.000769] FS-Cache: N-cookie c=00000030 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000958] FS-Cache: N-cookie d=000000003f524057{9p.inode} n=00000000e6e16996
	[  +0.001039] FS-Cache: N-key=[8] '7670ed0000000000'
	[  +0.009586] FS-Cache: Duplicate cookie detected
	[  +0.000770] FS-Cache: O-cookie c=0000002a [p=00000027 fl=226 nc=0 na=1]
	[  +0.000960] FS-Cache: O-cookie d=000000003f524057{9p.inode} n=00000000a61dace4
	[  +0.001049] FS-Cache: O-key=[8] '7670ed0000000000'
	[  +0.000697] FS-Cache: N-cookie c=00000031 [p=00000027 fl=2 nc=0 na=1]
	[  +0.001086] FS-Cache: N-cookie d=000000003f524057{9p.inode} n=00000000520d2c99
	[  +0.001040] FS-Cache: N-key=[8] '7670ed0000000000'
	[  +1.832465] FS-Cache: Duplicate cookie detected
	[  +0.000739] FS-Cache: O-cookie c=00000028 [p=00000027 fl=226 nc=0 na=1]
	[  +0.001072] FS-Cache: O-cookie d=000000003f524057{9p.inode} n=00000000c48f847e
	[  +0.001054] FS-Cache: O-key=[8] '7570ed0000000000'
	[  +0.000714] FS-Cache: N-cookie c=00000033 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000940] FS-Cache: N-cookie d=000000003f524057{9p.inode} n=00000000d0411578
	[  +0.001048] FS-Cache: N-key=[8] '7570ed0000000000'
	[  +0.410305] FS-Cache: Duplicate cookie detected
	[  +0.000752] FS-Cache: O-cookie c=0000002d [p=00000027 fl=226 nc=0 na=1]
	[  +0.000955] FS-Cache: O-cookie d=000000003f524057{9p.inode} n=00000000cf4dd87c
	[  +0.001120] FS-Cache: O-key=[8] '7b70ed0000000000'
	[  +0.000698] FS-Cache: N-cookie c=00000034 [p=00000027 fl=2 nc=0 na=1]
	[  +0.000920] FS-Cache: N-cookie d=000000003f524057{9p.inode} n=00000000ed85fc4a
	[  +0.001085] FS-Cache: N-key=[8] '7b70ed0000000000'
	
	* 
	* ==> etcd [0d9141b099adaf4e5f86e04d45283113aaf3fa0ffdcbf37fa1edbdf37cfd96f9] <==
	* {"level":"info","ts":"2023-09-18T19:18:09.101096Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2023-09-18T19:18:09.101223Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2023-09-18T19:18:09.103394Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-09-18T19:18:09.103577Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-09-18T19:18:09.103833Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-09-18T19:18:09.105108Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-09-18T19:18:09.105226Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-09-18T19:18:09.58191Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2023-09-18T19:18:09.582038Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-09-18T19:18:09.582081Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2023-09-18T19:18:09.582134Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2023-09-18T19:18:09.582169Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-09-18T19:18:09.58222Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2023-09-18T19:18:09.582253Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-09-18T19:18:09.585194Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-689235 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-18T19:18:09.585291Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-18T19:18:09.58645Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-18T19:18:09.586626Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-18T19:18:09.586851Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-18T19:18:09.587812Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2023-09-18T19:18:09.588281Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-18T19:18:09.588412Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-18T19:18:09.588474Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-18T19:18:09.603911Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-18T19:18:09.60401Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  19:20:04 up  3:02,  0 users,  load average: 1.81, 1.71, 1.62
	Linux multinode-689235 5.15.0-1044-aws #49~20.04.1-Ubuntu SMP Mon Aug 21 17:10:24 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [e7d88a302e928929307868ab0117fefe2a68138e8f94865201c3bae290368674] <==
	* I0918 19:19:01.849460       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0918 19:19:01.849490       1 main.go:227] handling current node
	I0918 19:19:11.856470       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0918 19:19:11.856610       1 main.go:227] handling current node
	I0918 19:19:21.868863       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0918 19:19:21.868891       1 main.go:227] handling current node
	I0918 19:19:21.868901       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0918 19:19:21.868907       1 main.go:250] Node multinode-689235-m02 has CIDR [10.244.1.0/24] 
	I0918 19:19:21.869059       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
	I0918 19:19:31.873060       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0918 19:19:31.873090       1 main.go:227] handling current node
	I0918 19:19:31.873101       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0918 19:19:31.873107       1 main.go:250] Node multinode-689235-m02 has CIDR [10.244.1.0/24] 
	I0918 19:19:41.886693       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0918 19:19:41.886722       1 main.go:227] handling current node
	I0918 19:19:41.886741       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0918 19:19:41.886747       1 main.go:250] Node multinode-689235-m02 has CIDR [10.244.1.0/24] 
	I0918 19:19:51.891771       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0918 19:19:51.892109       1 main.go:227] handling current node
	I0918 19:19:51.892166       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0918 19:19:51.892199       1 main.go:250] Node multinode-689235-m02 has CIDR [10.244.1.0/24] 
	I0918 19:20:01.904972       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0918 19:20:01.905124       1 main.go:227] handling current node
	I0918 19:20:01.905161       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0918 19:20:01.905214       1 main.go:250] Node multinode-689235-m02 has CIDR [10.244.1.0/24] 
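	kindnet's reconcile loop above alternates between handling the local node and ensuring a route to multinode-689235-m02's PodCIDR (10.244.1.0/24 via 192.168.58.3, per the routes.go:62 line). A minimal sketch of that route add with the vishvananda/netlink package, using the values from the log (illustrative, not kindnet's actual source):

	package main

	import (
		"log"
		"net"

		"github.com/vishvananda/netlink"
	)

	func main() {
		// Remote node's PodCIDR and InternalIP, as logged above for
		// multinode-689235-m02.
		_, dst, err := net.ParseCIDR("10.244.1.0/24")
		if err != nil {
			log.Fatal(err)
		}
		route := &netlink.Route{
			Dst: dst,
			Gw:  net.ParseIP("192.168.58.3"),
		}
		// Equivalent to: ip route add 10.244.1.0/24 via 192.168.58.3
		// (requires CAP_NET_ADMIN; a reconciler would tolerate EEXIST).
		if err := netlink.RouteAdd(route); err != nil {
			log.Fatal(err)
		}
	}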
	
	* 
	* ==> kube-apiserver [4518bed1b6e783c08526b0075adcb0b0d9a0ad1cd5c514789c5213d741c870fe] <==
	* I0918 19:18:13.694186       1 controller.go:624] quota admission added evaluator for: namespaces
	I0918 19:18:13.704995       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0918 19:18:13.705381       1 aggregator.go:166] initial CRD sync complete...
	I0918 19:18:13.705442       1 autoregister_controller.go:141] Starting autoregister controller
	I0918 19:18:13.705471       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0918 19:18:13.705513       1 cache.go:39] Caches are synced for autoregister controller
	I0918 19:18:13.739274       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0918 19:18:13.747468       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0918 19:18:14.487403       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0918 19:18:14.492870       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0918 19:18:14.492897       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0918 19:18:15.063980       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0918 19:18:15.149182       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0918 19:18:15.297328       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0918 19:18:15.304206       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0918 19:18:15.305380       1 controller.go:624] quota admission added evaluator for: endpoints
	I0918 19:18:15.310871       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0918 19:18:15.606526       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0918 19:18:16.814636       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0918 19:18:16.830332       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0918 19:18:16.842834       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0918 19:18:29.901080       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0918 19:18:29.910253       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	E0918 19:19:58.846153       1 watch.go:287] unable to encode watch object *v1.WatchEvent: http2: stream closed (&streaming.encoderWithAllocator{writer:responsewriter.outerWithCloseNotifyAndFlush{UserProvidedDecorator:(*metrics.ResponseWriterDelegator)(0x400c5651a0), InnerCloseNotifierFlusher:struct { httpsnoop.Unwrapper; http.ResponseWriter; http.Flusher; http.CloseNotifier; http.Pusher }{Unwrapper:(*httpsnoop.rw)(0x400c570cd0), ResponseWriter:(*httpsnoop.rw)(0x400c570cd0), Flusher:(*httpsnoop.rw)(0x400c570cd0), CloseNotifier:(*httpsnoop.rw)(0x400c570cd0), Pusher:(*httpsnoop.rw)(0x400c570cd0)}}, encoder:(*versioning.codec)(0x400c2dbc20), memAllocator:(*runtime.Allocator)(0x400c1888d0)})
	E0918 19:20:01.815886       1 upgradeaware.go:425] Error proxying data from client to backend: write tcp 192.168.58.2:54488->192.168.58.3:10250: write: broken pipe
	
	* 
	* ==> kube-controller-manager [368a2eaa1ce89469c2e7df30dcc5749fddfdee6f21a8feaaa256032942abd80e] <==
	* I0918 19:18:30.850163       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="85.835µs"
	I0918 19:19:02.221010       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="169.158µs"
	I0918 19:19:02.239006       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="77.35µs"
	I0918 19:19:03.142052       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="116.874µs"
	I0918 19:19:03.196215       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="13.216253ms"
	I0918 19:19:03.196318       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="68.423µs"
	I0918 19:19:04.913116       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0918 19:19:19.125374       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-689235-m02\" does not exist"
	I0918 19:19:19.160688       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-rz57v"
	I0918 19:19:19.166904       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-lhfgv"
	I0918 19:19:19.172559       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-689235-m02" podCIDRs=["10.244.1.0/24"]
	I0918 19:19:19.916424       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-689235-m02"
	I0918 19:19:19.916488       1 event.go:307] "Event occurred" object="multinode-689235-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-689235-m02 event: Registered Node multinode-689235-m02 in Controller"
	I0918 19:19:52.517367       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-689235-m02"
	I0918 19:19:55.320746       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I0918 19:19:55.341253       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-2bktr"
	I0918 19:19:55.358533       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-rmmxk"
	I0918 19:19:55.376079       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="55.604385ms"
	I0918 19:19:55.394018       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="17.88929ms"
	I0918 19:19:55.420858       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="26.785446ms"
	I0918 19:19:55.420974       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="67.799µs"
	I0918 19:19:58.249031       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="8.878187ms"
	I0918 19:19:58.249891       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="71.549µs"
	I0918 19:19:58.832210       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="20.035677ms"
	I0918 19:19:58.833196       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="64.254µs"
	
	* 
	* ==> kube-proxy [a561db11253796c4c29d18cdf93f6d38c62f60f9506dfca359b10ade14b033c2] <==
	* I0918 19:18:31.599134       1 server_others.go:69] "Using iptables proxy"
	I0918 19:18:31.618084       1 node.go:141] Successfully retrieved node IP: 192.168.58.2
	I0918 19:18:31.731662       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0918 19:18:31.733864       1 server_others.go:152] "Using iptables Proxier"
	I0918 19:18:31.733897       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0918 19:18:31.733906       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0918 19:18:31.733953       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0918 19:18:31.734219       1 server.go:846] "Version info" version="v1.28.2"
	I0918 19:18:31.734233       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0918 19:18:31.735753       1 config.go:188] "Starting service config controller"
	I0918 19:18:31.735772       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0918 19:18:31.736156       1 config.go:97] "Starting endpoint slice config controller"
	I0918 19:18:31.736171       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0918 19:18:31.736621       1 config.go:315] "Starting node config controller"
	I0918 19:18:31.736638       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0918 19:18:31.837254       1 shared_informer.go:318] Caches are synced for service config
	I0918 19:18:31.837256       1 shared_informer.go:318] Caches are synced for node config
	I0918 19:18:31.837272       1 shared_informer.go:318] Caches are synced for endpoint slice config
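	Note the route_localnet=1 line above: it is what allows NodePort services to answer on 127.0.0.1. kube-proxy sets it through its own sysctl helper; a minimal sketch of the equivalent raw /proc write, for illustration only:

	package main

	import (
		"log"
		"os"
	)

	func main() {
		// Equivalent to: sysctl -w net.ipv4.conf.all.route_localnet=1
		// (requires root, as when kube-proxy does this at startup).
		err := os.WriteFile("/proc/sys/net/ipv4/conf/all/route_localnet", []byte("1"), 0o644)
		if err != nil {
			log.Fatal(err)
		}
	}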
	
	* 
	* ==> kube-scheduler [8d8de90fd380fcb72fe7a6de1f7b0cad69411bbdcc1f1abf090204a794b3bcdb] <==
	* W0918 19:18:13.919809       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0918 19:18:13.919935       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0918 19:18:13.924425       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0918 19:18:13.924818       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0918 19:18:13.924606       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0918 19:18:13.924929       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0918 19:18:13.924657       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0918 19:18:13.925008       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0918 19:18:13.924705       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0918 19:18:13.925091       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0918 19:18:13.924752       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0918 19:18:13.925176       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0918 19:18:13.924787       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0918 19:18:13.925264       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0918 19:18:13.925444       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0918 19:18:13.925499       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0918 19:18:14.736934       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0918 19:18:14.737061       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0918 19:18:14.770762       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0918 19:18:14.770894       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0918 19:18:14.772060       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0918 19:18:14.772148       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0918 19:18:14.850576       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0918 19:18:14.850703       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0918 19:18:15.307544       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Sep 18 19:18:30 multinode-689235 kubelet[1403]: I0918 19:18:30.154559    1403 topology_manager.go:215] "Topology Admit Handler" podUID="aedacfda-e3d4-48ea-8612-a3a48c64a15d" podNamespace="kube-system" podName="kube-proxy-fgvhl"
	Sep 18 19:18:30 multinode-689235 kubelet[1403]: W0918 19:18:30.172971    1403 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:multinode-689235" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'multinode-689235' and this object
	Sep 18 19:18:30 multinode-689235 kubelet[1403]: E0918 19:18:30.173022    1403 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:multinode-689235" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'multinode-689235' and this object
	Sep 18 19:18:30 multinode-689235 kubelet[1403]: W0918 19:18:30.173237    1403 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:multinode-689235" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'multinode-689235' and this object
	Sep 18 19:18:30 multinode-689235 kubelet[1403]: E0918 19:18:30.173262    1403 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:multinode-689235" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'multinode-689235' and this object
	Sep 18 19:18:30 multinode-689235 kubelet[1403]: I0918 19:18:30.228364    1403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aedacfda-e3d4-48ea-8612-a3a48c64a15d-xtables-lock\") pod \"kube-proxy-fgvhl\" (UID: \"aedacfda-e3d4-48ea-8612-a3a48c64a15d\") " pod="kube-system/kube-proxy-fgvhl"
	Sep 18 19:18:30 multinode-689235 kubelet[1403]: I0918 19:18:30.228424    1403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xxgd\" (UniqueName: \"kubernetes.io/projected/aedacfda-e3d4-48ea-8612-a3a48c64a15d-kube-api-access-7xxgd\") pod \"kube-proxy-fgvhl\" (UID: \"aedacfda-e3d4-48ea-8612-a3a48c64a15d\") " pod="kube-system/kube-proxy-fgvhl"
	Sep 18 19:18:30 multinode-689235 kubelet[1403]: I0918 19:18:30.228479    1403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/aedacfda-e3d4-48ea-8612-a3a48c64a15d-kube-proxy\") pod \"kube-proxy-fgvhl\" (UID: \"aedacfda-e3d4-48ea-8612-a3a48c64a15d\") " pod="kube-system/kube-proxy-fgvhl"
	Sep 18 19:18:30 multinode-689235 kubelet[1403]: I0918 19:18:30.228509    1403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aedacfda-e3d4-48ea-8612-a3a48c64a15d-lib-modules\") pod \"kube-proxy-fgvhl\" (UID: \"aedacfda-e3d4-48ea-8612-a3a48c64a15d\") " pod="kube-system/kube-proxy-fgvhl"
	Sep 18 19:18:31 multinode-689235 kubelet[1403]: W0918 19:18:31.345516    1403 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/e0b155a28412be3d94e22f1ca1010ac124c38296f3bdf609ef8b0f402546fbe5/crio-e9417a0c3f2f21fb5bcb1ce6f07cdbbcb22dbf3d83f73d3d4cf43034b68d5443 WatchSource:0}: Error finding container e9417a0c3f2f21fb5bcb1ce6f07cdbbcb22dbf3d83f73d3d4cf43034b68d5443: Status 404 returned error can't find the container with id e9417a0c3f2f21fb5bcb1ce6f07cdbbcb22dbf3d83f73d3d4cf43034b68d5443
	Sep 18 19:18:32 multinode-689235 kubelet[1403]: I0918 19:18:32.077917    1403 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-5jgz2" podStartSLOduration=2.077875538 podCreationTimestamp="2023-09-18 19:18:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-18 19:18:32.07729335 +0000 UTC m=+15.286845167" watchObservedRunningTime="2023-09-18 19:18:32.077875538 +0000 UTC m=+15.287427347"
	Sep 18 19:18:32 multinode-689235 kubelet[1403]: I0918 19:18:32.078016    1403 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-fgvhl" podStartSLOduration=2.077999272 podCreationTimestamp="2023-09-18 19:18:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-18 19:18:32.063750836 +0000 UTC m=+15.273302662" watchObservedRunningTime="2023-09-18 19:18:32.077999272 +0000 UTC m=+15.287551089"
	Sep 18 19:19:02 multinode-689235 kubelet[1403]: I0918 19:19:02.182337    1403 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Sep 18 19:19:02 multinode-689235 kubelet[1403]: I0918 19:19:02.209845    1403 topology_manager.go:215] "Topology Admit Handler" podUID="e63a1107-d248-405b-b8a7-367a9a5682de" podNamespace="kube-system" podName="storage-provisioner"
	Sep 18 19:19:02 multinode-689235 kubelet[1403]: I0918 19:19:02.215551    1403 topology_manager.go:215] "Topology Admit Handler" podUID="d643472b-4be9-4a29-bf6a-e83171d46b1c" podNamespace="kube-system" podName="coredns-5dd5756b68-52fpx"
	Sep 18 19:19:02 multinode-689235 kubelet[1403]: I0918 19:19:02.263346    1403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d643472b-4be9-4a29-bf6a-e83171d46b1c-config-volume\") pod \"coredns-5dd5756b68-52fpx\" (UID: \"d643472b-4be9-4a29-bf6a-e83171d46b1c\") " pod="kube-system/coredns-5dd5756b68-52fpx"
	Sep 18 19:19:02 multinode-689235 kubelet[1403]: I0918 19:19:02.263462    1403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbkhn\" (UniqueName: \"kubernetes.io/projected/d643472b-4be9-4a29-bf6a-e83171d46b1c-kube-api-access-kbkhn\") pod \"coredns-5dd5756b68-52fpx\" (UID: \"d643472b-4be9-4a29-bf6a-e83171d46b1c\") " pod="kube-system/coredns-5dd5756b68-52fpx"
	Sep 18 19:19:02 multinode-689235 kubelet[1403]: I0918 19:19:02.263503    1403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/e63a1107-d248-405b-b8a7-367a9a5682de-tmp\") pod \"storage-provisioner\" (UID: \"e63a1107-d248-405b-b8a7-367a9a5682de\") " pod="kube-system/storage-provisioner"
	Sep 18 19:19:02 multinode-689235 kubelet[1403]: I0918 19:19:02.263528    1403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmnw2\" (UniqueName: \"kubernetes.io/projected/e63a1107-d248-405b-b8a7-367a9a5682de-kube-api-access-lmnw2\") pod \"storage-provisioner\" (UID: \"e63a1107-d248-405b-b8a7-367a9a5682de\") " pod="kube-system/storage-provisioner"
	Sep 18 19:19:02 multinode-689235 kubelet[1403]: W0918 19:19:02.569459    1403 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/e0b155a28412be3d94e22f1ca1010ac124c38296f3bdf609ef8b0f402546fbe5/crio-2c7b3cdd7049ce158e890e86f1028765e436203adcf59afb09379bcd3796bf4b WatchSource:0}: Error finding container 2c7b3cdd7049ce158e890e86f1028765e436203adcf59afb09379bcd3796bf4b: Status 404 returned error can't find the container with id 2c7b3cdd7049ce158e890e86f1028765e436203adcf59afb09379bcd3796bf4b
	Sep 18 19:19:03 multinode-689235 kubelet[1403]: I0918 19:19:03.168620    1403 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-52fpx" podStartSLOduration=33.16857519 podCreationTimestamp="2023-09-18 19:18:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-18 19:19:03.143123726 +0000 UTC m=+46.352675543" watchObservedRunningTime="2023-09-18 19:19:03.16857519 +0000 UTC m=+46.378126999"
	Sep 18 19:19:03 multinode-689235 kubelet[1403]: I0918 19:19:03.181705    1403 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=32.181659791 podCreationTimestamp="2023-09-18 19:18:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-18 19:19:03.169054544 +0000 UTC m=+46.378606353" watchObservedRunningTime="2023-09-18 19:19:03.181659791 +0000 UTC m=+46.391211608"
	Sep 18 19:19:55 multinode-689235 kubelet[1403]: I0918 19:19:55.385788    1403 topology_manager.go:215] "Topology Admit Handler" podUID="27f127bc-82d4-4213-8ee8-498dc898217f" podNamespace="default" podName="busybox-5bc68d56bd-rmmxk"
	Sep 18 19:19:55 multinode-689235 kubelet[1403]: I0918 19:19:55.420360    1403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dkrj\" (UniqueName: \"kubernetes.io/projected/27f127bc-82d4-4213-8ee8-498dc898217f-kube-api-access-2dkrj\") pod \"busybox-5bc68d56bd-rmmxk\" (UID: \"27f127bc-82d4-4213-8ee8-498dc898217f\") " pod="default/busybox-5bc68d56bd-rmmxk"
	Sep 18 19:19:55 multinode-689235 kubelet[1403]: W0918 19:19:55.740702    1403 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/e0b155a28412be3d94e22f1ca1010ac124c38296f3bdf609ef8b0f402546fbe5/crio-c20fd4b5177888e4cd8e17fb6ef1cab01c73e82a437927188ffe5eadef2a75c1 WatchSource:0}: Error finding container c20fd4b5177888e4cd8e17fb6ef1cab01c73e82a437927188ffe5eadef2a75c1: Status 404 returned error can't find the container with id c20fd4b5177888e4cd8e17fb6ef1cab01c73e82a437927188ffe5eadef2a75c1
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p multinode-689235 -n multinode-689235
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-689235 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (4.98s)
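The harness's last post-mortem check above (kubectl get po ... --field-selector=status.phase!=Running) maps directly onto a client-go List call. A minimal sketch, assuming a standard kubeconfig at $HOME/.kube/config (illustrative, not part of the harness):

package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the same kubeconfig kubectl would use by default.
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}
	// List pods in all namespaces whose phase is not Running,
	// mirroring the harness's field selector.
	pods, err := clientset.CoreV1().Pods(metav1.NamespaceAll).List(
		context.Background(),
		metav1.ListOptions{FieldSelector: "status.phase!=Running"})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		fmt.Println(p.Namespace, p.Name, p.Status.Phase)
	}
}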

                                                
                                    
TestRunningBinaryUpgrade (69.76s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.17.0.3766263769.exe start -p running-upgrade-152014 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.17.0.3766263769.exe start -p running-upgrade-152014 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m1.281904778s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-152014 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:143: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p running-upgrade-152014 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (3.857400775s)

                                                
                                                
-- stdout --
	* [running-upgrade-152014] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17263
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17263-642665/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17263-642665/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.2
	* Using the docker driver based on existing profile
	* Starting control plane node running-upgrade-152014 in cluster running-upgrade-152014
	* Pulling base image ...
	* Updating the running docker "running-upgrade-152014" container ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0918 19:37:20.619188  776986 out.go:296] Setting OutFile to fd 1 ...
	I0918 19:37:20.619415  776986 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 19:37:20.619444  776986 out.go:309] Setting ErrFile to fd 2...
	I0918 19:37:20.619464  776986 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 19:37:20.619837  776986 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17263-642665/.minikube/bin
	I0918 19:37:20.621240  776986 out.go:303] Setting JSON to false
	I0918 19:37:20.622430  776986 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":11986,"bootTime":1695053855,"procs":277,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0918 19:37:20.622545  776986 start.go:138] virtualization:  
	I0918 19:37:20.625473  776986 out.go:177] * [running-upgrade-152014] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0918 19:37:20.627963  776986 out.go:177]   - MINIKUBE_LOCATION=17263
	I0918 19:37:20.628089  776986 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17263-642665/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4
	I0918 19:37:20.630076  776986 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 19:37:20.628136  776986 notify.go:220] Checking for updates...
	I0918 19:37:20.635920  776986 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17263-642665/kubeconfig
	I0918 19:37:20.638257  776986 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17263-642665/.minikube
	I0918 19:37:20.640535  776986 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0918 19:37:20.642625  776986 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 19:37:20.645053  776986 config.go:182] Loaded profile config "running-upgrade-152014": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0918 19:37:20.647901  776986 out.go:177] * Kubernetes 1.28.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.2
	I0918 19:37:20.649941  776986 driver.go:373] Setting default libvirt URI to qemu:///system
	I0918 19:37:20.708807  776986 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0918 19:37:20.711061  776986 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0918 19:37:20.828102  776986 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:54 SystemTime:2023-09-18 19:37:20.816444728 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0918 19:37:20.828215  776986 docker.go:294] overlay module found
	I0918 19:37:20.830594  776986 out.go:177] * Using the docker driver based on existing profile
	I0918 19:37:20.832333  776986 start.go:298] selected driver: docker
	I0918 19:37:20.832357  776986 start.go:902] validating driver "docker" against &{Name:running-upgrade-152014 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:running-upgrade-152014 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.70.207 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s}
	I0918 19:37:20.832459  776986 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 19:37:20.833278  776986 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0918 19:37:20.850694  776986 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17263-642665/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4.checksum
	I0918 19:37:20.909682  776986 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:54 SystemTime:2023-09-18 19:37:20.899327478 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0918 19:37:20.909998  776986 cni.go:84] Creating CNI manager for ""
	I0918 19:37:20.910018  776986 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0918 19:37:20.910032  776986 start_flags.go:321] config:
	{Name:running-upgrade-152014 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:running-upgrade-152014 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.70.207 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s}
	I0918 19:37:20.914947  776986 out.go:177] * Starting control plane node running-upgrade-152014 in cluster running-upgrade-152014
	I0918 19:37:20.916967  776986 cache.go:122] Beginning downloading kic base image for docker with crio
	I0918 19:37:20.919022  776986 out.go:177] * Pulling base image ...
	I0918 19:37:20.920950  776986 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I0918 19:37:20.921024  776986 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I0918 19:37:20.940214  776986 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon, skipping pull
	I0918 19:37:20.940239  776986 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e exists in daemon, skipping load
	W0918 19:37:21.006898  776986 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I0918 19:37:21.007094  776986 profile.go:148] Saving config to /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/running-upgrade-152014/config.json ...
	I0918 19:37:21.007399  776986 cache.go:195] Successfully downloaded all kic artifacts
	I0918 19:37:21.007455  776986 start.go:365] acquiring machines lock for running-upgrade-152014: {Name:mk79bdb43c879a4758f5cd938bd34557a995c844 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 19:37:21.007521  776986 start.go:369] acquired machines lock for "running-upgrade-152014" in 41.19µs
	I0918 19:37:21.007535  776986 start.go:96] Skipping create...Using existing machine configuration
	I0918 19:37:21.007541  776986 fix.go:54] fixHost starting: 
	I0918 19:37:21.007992  776986 cli_runner.go:164] Run: docker container inspect running-upgrade-152014 --format={{.State.Status}}
	I0918 19:37:21.008108  776986 cache.go:107] acquiring lock: {Name:mk0448c5eaa4bfb73be4e8a89c51ab7eac017bb2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 19:37:21.008196  776986 cache.go:115] /home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0918 19:37:21.008211  776986 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 153.346µs
	I0918 19:37:21.008238  776986 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0918 19:37:21.008248  776986 cache.go:107] acquiring lock: {Name:mkb1bcf2c79ad5dd72a21a8060abf258158deb3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 19:37:21.008282  776986 cache.go:115] /home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I0918 19:37:21.008287  776986 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 41.601µs
	I0918 19:37:21.008294  776986 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	I0918 19:37:21.008310  776986 cache.go:107] acquiring lock: {Name:mk94436bfb3f6c5c3b54b56ba14852d3d7287ff8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 19:37:21.008342  776986 cache.go:115] /home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I0918 19:37:21.008347  776986 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 47.303µs
	I0918 19:37:21.008357  776986 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	I0918 19:37:21.008364  776986 cache.go:107] acquiring lock: {Name:mkae3ea1402778dbccc66e56bdf38eb58069c2ec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 19:37:21.008390  776986 cache.go:115] /home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I0918 19:37:21.008395  776986 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 32.353µs
	I0918 19:37:21.008401  776986 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	I0918 19:37:21.008408  776986 cache.go:107] acquiring lock: {Name:mkda06880cf9e7b8736499fc8bc2f400a2112f1f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 19:37:21.008449  776986 cache.go:115] /home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I0918 19:37:21.008454  776986 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 46.753µs
	I0918 19:37:21.008460  776986 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	I0918 19:37:21.008471  776986 cache.go:107] acquiring lock: {Name:mk8176c8ba014cd239d9c555b9081841f8d76309 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 19:37:21.008497  776986 cache.go:115] /home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I0918 19:37:21.008504  776986 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 34.182µs
	I0918 19:37:21.008510  776986 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	I0918 19:37:21.008519  776986 cache.go:107] acquiring lock: {Name:mkba817b4e85381a0702f3b894daea208c197c81 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 19:37:21.008544  776986 cache.go:115] /home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I0918 19:37:21.008548  776986 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 30.597µs
	I0918 19:37:21.008554  776986 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I0918 19:37:21.008580  776986 cache.go:107] acquiring lock: {Name:mk1f57987a30fd148d99b1a6ecc02f44f5054f82 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 19:37:21.008608  776986 cache.go:115] /home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I0918 19:37:21.008613  776986 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 34.084µs
	I0918 19:37:21.008619  776986 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	I0918 19:37:21.008626  776986 cache.go:87] Successfully saved all images to host disk.
	I0918 19:37:21.027849  776986 fix.go:102] recreateIfNeeded on running-upgrade-152014: state=Running err=<nil>
	W0918 19:37:21.027906  776986 fix.go:128] unexpected machine state, will restart: <nil>
	I0918 19:37:21.031117  776986 out.go:177] * Updating the running docker "running-upgrade-152014" container ...
	I0918 19:37:21.033252  776986 machine.go:88] provisioning docker machine ...
	I0918 19:37:21.033290  776986 ubuntu.go:169] provisioning hostname "running-upgrade-152014"
	I0918 19:37:21.033396  776986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-152014
	I0918 19:37:21.054914  776986 main.go:141] libmachine: Using SSH client type: native
	I0918 19:37:21.055498  776986 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 33606 <nil> <nil>}
	I0918 19:37:21.055517  776986 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-152014 && echo "running-upgrade-152014" | sudo tee /etc/hostname
	I0918 19:37:21.213053  776986 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-152014
	
	I0918 19:37:21.213132  776986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-152014
	I0918 19:37:21.233076  776986 main.go:141] libmachine: Using SSH client type: native
	I0918 19:37:21.233499  776986 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 33606 <nil> <nil>}
	I0918 19:37:21.233523  776986 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-152014' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-152014/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-152014' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0918 19:37:21.381807  776986 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 19:37:21.381836  776986 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17263-642665/.minikube CaCertPath:/home/jenkins/minikube-integration/17263-642665/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17263-642665/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17263-642665/.minikube}
	I0918 19:37:21.381856  776986 ubuntu.go:177] setting up certificates
	I0918 19:37:21.381866  776986 provision.go:83] configureAuth start
	I0918 19:37:21.381934  776986 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-152014
	I0918 19:37:21.401323  776986 provision.go:138] copyHostCerts
	I0918 19:37:21.401396  776986 exec_runner.go:144] found /home/jenkins/minikube-integration/17263-642665/.minikube/ca.pem, removing ...
	I0918 19:37:21.401414  776986 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17263-642665/.minikube/ca.pem
	I0918 19:37:21.401489  776986 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17263-642665/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17263-642665/.minikube/ca.pem (1082 bytes)
	I0918 19:37:21.401613  776986 exec_runner.go:144] found /home/jenkins/minikube-integration/17263-642665/.minikube/cert.pem, removing ...
	I0918 19:37:21.401626  776986 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17263-642665/.minikube/cert.pem
	I0918 19:37:21.401654  776986 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17263-642665/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17263-642665/.minikube/cert.pem (1123 bytes)
	I0918 19:37:21.401715  776986 exec_runner.go:144] found /home/jenkins/minikube-integration/17263-642665/.minikube/key.pem, removing ...
	I0918 19:37:21.401723  776986 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17263-642665/.minikube/key.pem
	I0918 19:37:21.401747  776986 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17263-642665/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17263-642665/.minikube/key.pem (1675 bytes)
	I0918 19:37:21.401796  776986 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17263-642665/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17263-642665/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17263-642665/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-152014 san=[192.168.70.207 127.0.0.1 localhost 127.0.0.1 minikube running-upgrade-152014]
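(provision.go:112 above issues a server certificate whose SANs cover the container IP, localhost, and the machine name. A self-contained sketch of producing such a certificate with Go's crypto/x509 — self-signed here for brevity, whereas minikube signs with the ca.pem/ca-key.pem pair listed in the auth options above:)

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.running-upgrade-152014"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs copied from the provision.go:112 line above.
		DNSNames:    []string{"localhost", "minikube", "running-upgrade-152014"},
		IPAddresses: []net.IP{net.ParseIP("192.168.70.207"), net.ParseIP("127.0.0.1")},
	}
	// Self-signed for brevity; minikube instead signs with its CA key.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}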
	I0918 19:37:21.945519  776986 provision.go:172] copyRemoteCerts
	I0918 19:37:21.945595  776986 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0918 19:37:21.945642  776986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-152014
	I0918 19:37:21.971546  776986 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33606 SSHKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/running-upgrade-152014/id_rsa Username:docker}
	I0918 19:37:22.075813  776986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17263-642665/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0918 19:37:22.108844  776986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17263-642665/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0918 19:37:22.144232  776986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17263-642665/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0918 19:37:22.175062  776986 provision.go:86] duration metric: configureAuth took 793.18236ms
	I0918 19:37:22.175128  776986 ubuntu.go:193] setting minikube options for container-runtime
	I0918 19:37:22.175350  776986 config.go:182] Loaded profile config "running-upgrade-152014": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0918 19:37:22.175478  776986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-152014
	I0918 19:37:22.203121  776986 main.go:141] libmachine: Using SSH client type: native
	I0918 19:37:22.203578  776986 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 33606 <nil> <nil>}
	I0918 19:37:22.203594  776986 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0918 19:37:22.863709  776986 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0918 19:37:22.863730  776986 machine.go:91] provisioned docker machine in 1.830453033s
	I0918 19:37:22.863740  776986 start.go:300] post-start starting for "running-upgrade-152014" (driver="docker")
	I0918 19:37:22.863751  776986 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0918 19:37:22.863866  776986 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0918 19:37:22.863905  776986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-152014
	I0918 19:37:22.883898  776986 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33606 SSHKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/running-upgrade-152014/id_rsa Username:docker}
	I0918 19:37:22.994148  776986 ssh_runner.go:195] Run: cat /etc/os-release
	I0918 19:37:22.998270  776986 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0918 19:37:22.998300  776986 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0918 19:37:22.998311  776986 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0918 19:37:22.998335  776986 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I0918 19:37:22.998353  776986 filesync.go:126] Scanning /home/jenkins/minikube-integration/17263-642665/.minikube/addons for local assets ...
	I0918 19:37:22.998424  776986 filesync.go:126] Scanning /home/jenkins/minikube-integration/17263-642665/.minikube/files for local assets ...
	I0918 19:37:22.998505  776986 filesync.go:149] local asset: /home/jenkins/minikube-integration/17263-642665/.minikube/files/etc/ssl/certs/6480032.pem -> 6480032.pem in /etc/ssl/certs
	I0918 19:37:22.998609  776986 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0918 19:37:23.010053  776986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17263-642665/.minikube/files/etc/ssl/certs/6480032.pem --> /etc/ssl/certs/6480032.pem (1708 bytes)
	I0918 19:37:23.036652  776986 start.go:303] post-start completed in 172.895506ms
	I0918 19:37:23.036749  776986 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0918 19:37:23.036801  776986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-152014
	I0918 19:37:23.057247  776986 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33606 SSHKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/running-upgrade-152014/id_rsa Username:docker}
	I0918 19:37:23.155435  776986 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0918 19:37:23.161177  776986 fix.go:56] fixHost completed within 2.153625981s
	I0918 19:37:23.161203  776986 start.go:83] releasing machines lock for "running-upgrade-152014", held for 2.153672913s
	I0918 19:37:23.161275  776986 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-152014
	I0918 19:37:23.187760  776986 ssh_runner.go:195] Run: cat /version.json
	I0918 19:37:23.187848  776986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-152014
	I0918 19:37:23.188082  776986 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0918 19:37:23.188135  776986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-152014
	I0918 19:37:23.239639  776986 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33606 SSHKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/running-upgrade-152014/id_rsa Username:docker}
	I0918 19:37:23.241013  776986 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33606 SSHKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/running-upgrade-152014/id_rsa Username:docker}
	W0918 19:37:23.382591  776986 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0918 19:37:23.382743  776986 ssh_runner.go:195] Run: systemctl --version
	I0918 19:37:23.503281  776986 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0918 19:37:23.675617  776986 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0918 19:37:23.682054  776986 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0918 19:37:23.708440  776986 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0918 19:37:23.708533  776986 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0918 19:37:23.741946  776986 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
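(The two find/mv runs above disable the loopback and bridge/podman CNI configs by renaming them with a .mk_disabled suffix, so the runtime stops picking them up. A minimal Go sketch of the same rename-to-disable idea — directory and patterns taken from the log, but this is not minikube's actual implementation:)

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// disableCNIConfigs renames matching configs to *.mk_disabled, the same
// effect as the find/mv pipeline in the log above.
func disableCNIConfigs(dir string, patterns []string) ([]string, error) {
	var disabled []string
	for _, pattern := range patterns {
		matches, err := filepath.Glob(filepath.Join(dir, pattern))
		if err != nil {
			return nil, err
		}
		for _, m := range matches {
			if filepath.Ext(m) == ".mk_disabled" {
				continue // already disabled
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				return nil, err
			}
			disabled = append(disabled, m)
		}
	}
	return disabled, nil
}

func main() {
	disabled, err := disableCNIConfigs("/etc/cni/net.d", []string{"*bridge*", "*podman*"})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// The log shows 100-crio-bridge.conf and 87-podman-bridge.conflist disabled.
	fmt.Println("disabled:", disabled)
}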
	I0918 19:37:23.742017  776986 start.go:469] detecting cgroup driver to use...
	I0918 19:37:23.742066  776986 detect.go:196] detected "cgroupfs" cgroup driver on host os
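(detect.go above reports the "cgroupfs" driver. One way to obtain that value — an assumption; detect.go's actual probe may differ — is to ask the docker daemon directly, which matches the CgroupDriver:cgroupfs field visible in the docker info dump later in this report:)

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Query the daemon's configured cgroup driver via a Go template.
	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
	if err != nil {
		fmt.Println("docker info failed:", err)
		return
	}
	fmt.Println("cgroup driver:", strings.TrimSpace(string(out))) // e.g. "cgroupfs"
}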
	I0918 19:37:23.742154  776986 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0918 19:37:23.773294  776986 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0918 19:37:23.785578  776986 docker.go:196] disabling cri-docker service (if available) ...
	I0918 19:37:23.785653  776986 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0918 19:37:23.799481  776986 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0918 19:37:23.812356  776986 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0918 19:37:23.825620  776986 docker.go:206] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0918 19:37:23.825687  776986 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0918 19:37:24.012292  776986 docker.go:212] disabling docker service ...
	I0918 19:37:24.012390  776986 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0918 19:37:24.029613  776986 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0918 19:37:24.045667  776986 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0918 19:37:24.202719  776986 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0918 19:37:24.353632  776986 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0918 19:37:24.366190  776986 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 19:37:24.384516  776986 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0918 19:37:24.384584  776986 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 19:37:24.403101  776986 out.go:177] 
	W0918 19:37:24.405278  776986 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0918 19:37:24.405299  776986 out.go:239] * 
	W0918 19:37:24.406401  776986 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 19:37:24.409126  776986 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:145: upgrade from v1.17.0 to HEAD failed: out/minikube-linux-arm64 start -p running-upgrade-152014 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
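(The exit status 90 traces to the pause_image update above: the new binary unconditionally runs sed against the /etc/crio/crio.conf.d/02-crio.conf drop-in, which the old kicbase v0.0.17 rootfs does not ship. A minimal sketch of a more defensive update — the candidate-path list is an assumption, and this is not minikube's actual code:)

package main

import (
	"fmt"
	"os"
	"regexp"
)

// Candidate config locations (assumed): older kicbase images ship only
// /etc/crio/crio.conf; newer ones add the 02-crio.conf drop-in.
var candidates = []string{
	"/etc/crio/crio.conf.d/02-crio.conf",
	"/etc/crio/crio.conf",
}

var pauseImageRe = regexp.MustCompile(`(?m)^\s*#?\s*pause_image\s*=.*$`)

// setPauseImage rewrites pause_image in the first config that exists,
// instead of failing like the unconditional sed in the log.
func setPauseImage(image string) error {
	for _, path := range candidates {
		data, err := os.ReadFile(path)
		if err != nil {
			continue // try the next candidate
		}
		updated := pauseImageRe.ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", image)))
		return os.WriteFile(path, updated, 0o644)
	}
	return fmt.Errorf("no cri-o config found in %v", candidates)
}

func main() {
	if err := setPauseImage("registry.k8s.io/pause:3.2"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}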
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2023-09-18 19:37:24.443430218 +0000 UTC m=+2558.377459024
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-152014
helpers_test.go:235: (dbg) docker inspect running-upgrade-152014:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7239966bea6c5d3158fc5c211c31777ff4d2c83325a11d2ac933a6f71cc5f165",
	        "Created": "2023-09-18T19:36:33.552371592Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 773506,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-09-18T19:36:34.153638734Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9b79b8263a5873a7b57b8bb7698df1f71e90108b3174dea92dc6c576c0a9dbf9",
	        "ResolvConfPath": "/var/lib/docker/containers/7239966bea6c5d3158fc5c211c31777ff4d2c83325a11d2ac933a6f71cc5f165/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7239966bea6c5d3158fc5c211c31777ff4d2c83325a11d2ac933a6f71cc5f165/hostname",
	        "HostsPath": "/var/lib/docker/containers/7239966bea6c5d3158fc5c211c31777ff4d2c83325a11d2ac933a6f71cc5f165/hosts",
	        "LogPath": "/var/lib/docker/containers/7239966bea6c5d3158fc5c211c31777ff4d2c83325a11d2ac933a6f71cc5f165/7239966bea6c5d3158fc5c211c31777ff4d2c83325a11d2ac933a6f71cc5f165-json.log",
	        "Name": "/running-upgrade-152014",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-152014:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "running-upgrade-152014",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/0c9549c82f0dd801d4045d1b46fecaf5224b6a4ebadcfa93e23f991dbc4ff80a-init/diff:/var/lib/docker/overlay2/f38cb2bd8f69b1eb64f55371998f51b07a602dc5334444b95a7bdd04edf816e8/diff:/var/lib/docker/overlay2/0aaec6b4aa94a6f721d4a9fb1f65173b6f6aee43c4e800726c1c70baf0325b21/diff:/var/lib/docker/overlay2/2142398ec7df920c5cc468a82ee6993b78d7335448da1ae2ef355cb39116d741/diff:/var/lib/docker/overlay2/30a3be5bc846eeb5a6692007014edec3fa9677d08a939afd790d818c18efd1e8/diff:/var/lib/docker/overlay2/74f159bfa9382574c197512d4f82f34d80f8e80cffc78a0db5dbdb11060dd6d7/diff:/var/lib/docker/overlay2/b82d499402d99f04bfa346de14f8b9e579efe2fb6150414c0eccfb58189bcc34/diff:/var/lib/docker/overlay2/096daf5936db020d69c4e9a2c06005b1ee70e2cd3ddf732a91a5f90c7fdff28e/diff:/var/lib/docker/overlay2/cba464c515e70541d9b9a4ee70f86a65c18f36b9810f264bce44790bfc20ac27/diff:/var/lib/docker/overlay2/586a45ebe322a9b8d40d94fb09bf5d4f744c99b24e501dbc1d2f5a1d5e1938a3/diff:/var/lib/docker/overlay2/080b00
22360c98c966b0325b816b071d20af80d94c808b525ba678aa67dfd158/diff:/var/lib/docker/overlay2/ac4c3e51870fa72f38a785e19cf695de88c8e82a54e2cab5aee911cbeae0e86d/diff:/var/lib/docker/overlay2/a7ab60c06bdca3a6b01e18243341ec875440948952bc85696528133215f03c17/diff:/var/lib/docker/overlay2/360a2898bcae4a29341d06e3704037123a63b491046b0c8fed5d2b1a91a6cf58/diff:/var/lib/docker/overlay2/8ab5564834c8bd80137eef2ad1604e42b5aa0314b3c980e32f455a9f07dd2906/diff:/var/lib/docker/overlay2/b9decda59f4d7015ddc8d536f5ccc98597a7a2ef5ed47b5337c5f57439042629/diff:/var/lib/docker/overlay2/9b6e283249c82b7b4299face9586f7c3e3dc34425be42f574e5220656d292e42/diff:/var/lib/docker/overlay2/4d8c84812f2b05a66689ce904f1bc13eaad25dea3bf8a02249459d972b5b0ddc/diff:/var/lib/docker/overlay2/6240cc9c963fdd9fba376a2884326e684a11068ee94dcca546948ba2d05029c3/diff:/var/lib/docker/overlay2/8881bcf0ef342f5de4e7b3683b98b4c0acf1464d83aed8ba4dd5f4bce6f6dbc3/diff:/var/lib/docker/overlay2/6055fb6358947e6d134935e22458fbb850c4c8b4c79fd120aa53b36930c87dce/diff:/var/lib/d
ocker/overlay2/7f5f53810b2c30181eafdcd4eb85e3b7a15f432d61f7b0cdeddf79af022d09a1/diff:/var/lib/docker/overlay2/c4adde5a09e751f8203007dcca711e44463b3b5356f0615af97b91c1de661dee/diff:/var/lib/docker/overlay2/56328cb4a756e8f7d5cf2ef661400a956ab9281ce19ad385ce1821d349d2f034/diff:/var/lib/docker/overlay2/25e035ace628fdd932de27fc45d1206c9a5ae178951e707978d1c17a6a54ec50/diff:/var/lib/docker/overlay2/a64f32516179416bcc0aaa30b4d0b1a221ad3ea4f2a79bced9473d1bf1df456a/diff:/var/lib/docker/overlay2/cc4ac13fa37076d01752fc96a6c69f79f7ce2ecd7fb303a98d33c205ced2322a/diff:/var/lib/docker/overlay2/57dbf8fe108f791e720c50f57976766b9a40706752e6d566e479e74c38a982fd/diff:/var/lib/docker/overlay2/c272e08f80e4ec66f8034abf1e45e9f96ff4f17b5f50102f70530b8b24c74edf/diff:/var/lib/docker/overlay2/9e1cf4e7614d8df16d085c7a3aeabdf4bccb0d12282a1bdb1ffc0c42608c673e/diff:/var/lib/docker/overlay2/3de07e1b0fa586e7434d860e73bcb923d7416f46ec3a36c2d68e83fd5231723b/diff:/var/lib/docker/overlay2/fff86ead8d3a0aeaf435b7793db5f4bf24921581e1fb42b3033c32cad47
1b4c4/diff:/var/lib/docker/overlay2/832675e8cdbb63c2159734ce1abaf600c7124223fa650b82909307cd1221c647/diff:/var/lib/docker/overlay2/9b6294afcb929bed36c1d7aa103574b01a64f3ecba7bf77c963dc47ec01f756f/diff:/var/lib/docker/overlay2/37ac1555a896aede9fa776861811e984b669716c336d09f946ddb26e32d800c1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0c9549c82f0dd801d4045d1b46fecaf5224b6a4ebadcfa93e23f991dbc4ff80a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0c9549c82f0dd801d4045d1b46fecaf5224b6a4ebadcfa93e23f991dbc4ff80a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0c9549c82f0dd801d4045d1b46fecaf5224b6a4ebadcfa93e23f991dbc4ff80a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-152014",
	                "Source": "/var/lib/docker/volumes/running-upgrade-152014/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-152014",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-152014",
	                "name.minikube.sigs.k8s.io": "running-upgrade-152014",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "81441220bcff36a41e772deb3ce22f96bf8019cdfe9c70469e1d92a8ab3cdf3e",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33606"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33605"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33604"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33603"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/81441220bcff",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "running-upgrade-152014": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.70.207"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "7239966bea6c",
	                        "running-upgrade-152014"
	                    ],
	                    "NetworkID": "ca8e61656f0d3266b54dc748518c045504b309a4187efec2445b1c618da58d47",
	                    "EndpointID": "dc07521453088b4272ba1ee88ea0ead751764f74672a84141995877a67551525",
	                    "Gateway": "192.168.70.1",
	                    "IPAddress": "192.168.70.207",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:46:cf",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
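(Throughout the log, cli_runner resolves the container's SSH endpoint with the inspect template seen above, and the inspect dump confirms 22/tcp maps to host port 33606. A minimal standalone sketch of the same lookup:)

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshHostPort shells out to `docker container inspect` with the same Go
// template the log uses to extract the host port bound to 22/tcp.
func sshHostPort(container string) (string, error) {
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", fmt.Errorf("inspect %s: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("running-upgrade-152014")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("ssh port:", port) // the inspect output above shows 33606
}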
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p running-upgrade-152014 -n running-upgrade-152014
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p running-upgrade-152014 -n running-upgrade-152014: exit status 4 (573.827101ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0918 19:37:24.878798  777677 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-152014" does not appear in /home/jenkins/minikube-integration/17263-642665/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-152014" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-152014" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-152014
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-152014: (3.064176236s)
--- FAIL: TestRunningBinaryUpgrade (69.76s)

                                                
                                    
x
+
TestMissingContainerUpgrade (172.61s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:322: (dbg) Run:  /tmp/minikube-v1.17.0.860430153.exe start -p missing-upgrade-637091 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:322: (dbg) Done: /tmp/minikube-v1.17.0.860430153.exe start -p missing-upgrade-637091 --memory=2200 --driver=docker  --container-runtime=crio: (2m8.8585946s)
version_upgrade_test.go:331: (dbg) Run:  docker stop missing-upgrade-637091
version_upgrade_test.go:331: (dbg) Done: docker stop missing-upgrade-637091: (1.953266227s)
version_upgrade_test.go:336: (dbg) Run:  docker rm missing-upgrade-637091
version_upgrade_test.go:342: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-637091 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:342: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p missing-upgrade-637091 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (38.105485138s)

                                                
                                                
-- stdout --
	* [missing-upgrade-637091] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17263
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17263-642665/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17263-642665/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.2
	* Using the docker driver based on existing profile
	* Starting control plane node missing-upgrade-637091 in cluster missing-upgrade-637091
	* Pulling base image ...
	* docker "missing-upgrade-637091" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0918 19:34:15.201841  764615 out.go:296] Setting OutFile to fd 1 ...
	I0918 19:34:15.202159  764615 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 19:34:15.202190  764615 out.go:309] Setting ErrFile to fd 2...
	I0918 19:34:15.202209  764615 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 19:34:15.202530  764615 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17263-642665/.minikube/bin
	I0918 19:34:15.203008  764615 out.go:303] Setting JSON to false
	I0918 19:34:15.204255  764615 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":11801,"bootTime":1695053855,"procs":239,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0918 19:34:15.204365  764615 start.go:138] virtualization:  
	I0918 19:34:15.208107  764615 out.go:177] * [missing-upgrade-637091] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0918 19:34:15.210533  764615 out.go:177]   - MINIKUBE_LOCATION=17263
	I0918 19:34:15.210629  764615 notify.go:220] Checking for updates...
	I0918 19:34:15.213426  764615 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 19:34:15.215404  764615 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17263-642665/kubeconfig
	I0918 19:34:15.217455  764615 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17263-642665/.minikube
	I0918 19:34:15.219374  764615 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0918 19:34:15.221118  764615 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 19:34:15.223465  764615 config.go:182] Loaded profile config "missing-upgrade-637091": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0918 19:34:15.225996  764615 out.go:177] * Kubernetes 1.28.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.2
	I0918 19:34:15.227758  764615 driver.go:373] Setting default libvirt URI to qemu:///system
	I0918 19:34:15.284752  764615 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0918 19:34:15.284925  764615 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0918 19:34:15.387324  764615 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:45 SystemTime:2023-09-18 19:34:15.377198208 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0918 19:34:15.387427  764615 docker.go:294] overlay module found
	I0918 19:34:15.390247  764615 out.go:177] * Using the docker driver based on existing profile
	I0918 19:34:15.392621  764615 start.go:298] selected driver: docker
	I0918 19:34:15.392640  764615 start.go:902] validating driver "docker" against &{Name:missing-upgrade-637091 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:missing-upgrade-637091 Namespace:default APIServerName:minikubeCA APIServer
Names:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.164 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath
: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s}
	I0918 19:34:15.392751  764615 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 19:34:15.393357  764615 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0918 19:34:15.512376  764615 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:45 SystemTime:2023-09-18 19:34:15.502425058 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0918 19:34:15.512668  764615 cni.go:84] Creating CNI manager for ""
	I0918 19:34:15.512680  764615 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0918 19:34:15.512691  764615 start_flags.go:321] config:
	{Name:missing-upgrade-637091 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:missing-upgrade-637091 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket
: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.164 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s}
	I0918 19:34:15.515914  764615 out.go:177] * Starting control plane node missing-upgrade-637091 in cluster missing-upgrade-637091
	I0918 19:34:15.518194  764615 cache.go:122] Beginning downloading kic base image for docker with crio
	I0918 19:34:15.522026  764615 out.go:177] * Pulling base image ...
	I0918 19:34:15.524097  764615 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I0918 19:34:15.524283  764615 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I0918 19:34:15.546882  764615 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e to local cache
	I0918 19:34:15.547042  764615 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local cache directory
	I0918 19:34:15.547680  764615 image.go:118] Writing gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e to local cache
	W0918 19:34:15.610424  764615 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
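(The 404 above means no preloaded image tarball exists for the v1.20.2/cri-o/arm64 combination, so minikube falls back to caching the images one by one in the cache.go lines that follow. A small sketch of that availability probe — the URL pattern is copied from the log line, and the helper name is hypothetical:)

package main

import (
	"fmt"
	"net/http"
)

// preloadExists checks whether a preload tarball is published for the
// given Kubernetes version, runtime, and architecture.
func preloadExists(k8sVersion, runtime, arch string) bool {
	url := fmt.Sprintf(
		"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/%s/preloaded-images-k8s-v18-%s-%s-overlay-%s.tar.lz4",
		k8sVersion, k8sVersion, runtime, arch)
	resp, err := http.Head(url)
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK // the log's probe got a 404
}

func main() {
	fmt.Println(preloadExists("v1.20.2", "cri-o", "arm64"))
}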
	I0918 19:34:15.610574  764615 profile.go:148] Saving config to /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/missing-upgrade-637091/config.json ...
	I0918 19:34:15.610932  764615 cache.go:107] acquiring lock: {Name:mk0448c5eaa4bfb73be4e8a89c51ab7eac017bb2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 19:34:15.611007  764615 cache.go:115] /home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0918 19:34:15.611053  764615 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 93.375µs
	I0918 19:34:15.611069  764615 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0918 19:34:15.611078  764615 cache.go:107] acquiring lock: {Name:mkb1bcf2c79ad5dd72a21a8060abf258158deb3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 19:34:15.611171  764615 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.2
	I0918 19:34:15.611349  764615 cache.go:107] acquiring lock: {Name:mk94436bfb3f6c5c3b54b56ba14852d3d7287ff8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 19:34:15.611425  764615 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.2
	I0918 19:34:15.611501  764615 cache.go:107] acquiring lock: {Name:mkae3ea1402778dbccc66e56bdf38eb58069c2ec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 19:34:15.611565  764615 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.2
	I0918 19:34:15.611638  764615 cache.go:107] acquiring lock: {Name:mkda06880cf9e7b8736499fc8bc2f400a2112f1f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 19:34:15.611699  764615 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.2
	I0918 19:34:15.611766  764615 cache.go:107] acquiring lock: {Name:mk8176c8ba014cd239d9c555b9081841f8d76309 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 19:34:15.611850  764615 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0918 19:34:15.611923  764615 cache.go:107] acquiring lock: {Name:mkba817b4e85381a0702f3b894daea208c197c81 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 19:34:15.611990  764615 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0918 19:34:15.612064  764615 cache.go:107] acquiring lock: {Name:mk1f57987a30fd148d99b1a6ecc02f44f5054f82 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 19:34:15.612123  764615 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0918 19:34:15.616331  764615 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.2
	I0918 19:34:15.617373  764615 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.2
	I0918 19:34:15.617789  764615 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.2
	I0918 19:34:15.617897  764615 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.2
	I0918 19:34:15.618003  764615 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0918 19:34:15.618213  764615 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0918 19:34:15.618662  764615 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0918 19:34:16.044169  764615 cache.go:162] opening:  /home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2
	I0918 19:34:16.061920  764615 cache.go:162] opening:  /home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2
	W0918 19:34:16.063418  764615 image.go:265] image registry.k8s.io/coredns:1.7.0 arch mismatch: want arm64 got amd64. fixing
	I0918 19:34:16.063458  764615 cache.go:162] opening:  /home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0
	W0918 19:34:16.079831  764615 image.go:265] image registry.k8s.io/kube-proxy:v1.20.2 arch mismatch: want arm64 got amd64. fixing
	I0918 19:34:16.079909  764615 cache.go:162] opening:  /home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2
	I0918 19:34:16.129215  764615 cache.go:162] opening:  /home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I0918 19:34:16.138228  764615 cache.go:162] opening:  /home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2
	W0918 19:34:16.142430  764615 image.go:265] image registry.k8s.io/etcd:3.4.13-0 arch mismatch: want arm64 got amd64. fixing
	I0918 19:34:16.142795  764615 cache.go:162] opening:  /home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0
	    > gcr.io/k8s-minikube/kicbase...:  17.69 KiB / 287.99 MiB [>] 0.01% ? p/s ?
	I0918 19:34:16.274795  764615 cache.go:157] /home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I0918 19:34:16.274826  764615 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 663.059569ms
	I0918 19:34:16.274840  764615 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  369.35 KiB / 287.99 MiB [] 0.13% ? p/s ?
	I0918 19:34:16.590460  764615 cache.go:157] /home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I0918 19:34:16.590489  764615 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 978.426531ms
	I0918 19:34:16.590504  764615 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  11.83 MiB / 287.99 MiB [>] 4.11% ? p/s ?
	I0918 19:34:16.800635  764615 cache.go:157] /home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I0918 19:34:16.800673  764615 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 1.189171197s
	I0918 19:34:16.800687  764615 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  25.93 MiB / 287.99 MiB  9.00% 43.14 MiB
	I0918 19:34:17.112788  764615 cache.go:157] /home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I0918 19:34:17.112818  764615 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 1.501738762s
	I0918 19:34:17.112841  764615 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  25.93 MiB / 287.99 MiB  9.00% 43.14 MiB
	I0918 19:34:17.343455  764615 cache.go:157] /home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I0918 19:34:17.343484  764615 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 1.732140869s
	I0918 19:34:17.343507  764615 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  25.94 MiB -> 43.87 MiB / 287.99 MiB  15.23% 40.36 MiB (intermediate progress updates omitted)
	I0918 19:34:17.997507  764615 cache.go:157] /home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I0918 19:34:17.997538  764615 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 2.385900389s
	I0918 19:34:17.997553  764615 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  51.03 MiB -> 209.68 MiB / 287.99 MiB  72.81% 44.01 MiB (intermediate progress updates omitted)
	I0918 19:34:21.162181  764615 cache.go:157] /home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I0918 19:34:21.162222  764615 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 5.550298115s
	I0918 19:34:21.162235  764615 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I0918 19:34:21.162246  764615 cache.go:87] Successfully saved all images to host disk.
	    > gcr.io/k8s-minikube/kicbase...:  287.99 MiB / 287.99 MiB  100.00% 34.74 MiB
	I0918 19:34:24.553935  764615 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e as a tarball
	I0918 19:34:24.553946  764615 cache.go:163] Loading gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e from local cache
	I0918 19:34:25.785051  764615 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e from cached tarball
	I0918 19:34:25.785093  764615 cache.go:195] Successfully downloaded all kic artifacts
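
The cache stanza above follows a plain check-before-write pattern: if the image tarball already exists under .minikube/cache/images, the save is skipped and only the lookup duration is logged; otherwise the image is written out as a tar. A minimal Go sketch of that shape (saveIfMissing is an illustrative helper, not minikube's actual API):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// saveIfMissing writes the image tarball only when the target file
	// does not already exist, mirroring the cache.go log lines above.
	func saveIfMissing(tarPath string, save func(string) error) error {
		start := time.Now()
		if _, err := os.Stat(tarPath); err == nil {
			fmt.Printf("%s exists, skipping (checked in %s)\n", tarPath, time.Since(start))
			return nil
		}
		if err := save(tarPath); err != nil {
			return fmt.Errorf("save to tar file failed: %w", err)
		}
		fmt.Printf("saved %s in %s\n", tarPath, time.Since(start))
		return nil
	}

	func main() {
		_ = saveIfMissing("/tmp/kube-apiserver_v1.20.2", func(p string) error {
			return os.WriteFile(p, []byte("placeholder"), 0o644)
		})
	}
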
	I0918 19:34:25.785178  764615 start.go:365] acquiring machines lock for missing-upgrade-637091: {Name:mk3aa927d80f9e225aabd8d3de1315662b7b2c65 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 19:34:25.785240  764615 start.go:369] acquired machines lock for "missing-upgrade-637091" in 41.871µs
	I0918 19:34:25.785259  764615 start.go:96] Skipping create...Using existing machine configuration
	I0918 19:34:25.785267  764615 fix.go:54] fixHost starting: 
	I0918 19:34:25.785540  764615 cli_runner.go:164] Run: docker container inspect missing-upgrade-637091 --format={{.State.Status}}
	W0918 19:34:25.811451  764615 cli_runner.go:211] docker container inspect missing-upgrade-637091 --format={{.State.Status}} returned with exit code 1
	I0918 19:34:25.811550  764615 fix.go:102] recreateIfNeeded on missing-upgrade-637091: state= err=unknown state "missing-upgrade-637091": docker container inspect missing-upgrade-637091 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-637091
	I0918 19:34:25.811596  764615 fix.go:107] machineExists: false. err=machine does not exist
	I0918 19:34:25.846535  764615 out.go:177] * docker "missing-upgrade-637091" container is missing, will recreate.
	I0918 19:34:25.870329  764615 delete.go:124] DEMOLISHING missing-upgrade-637091 ...
	I0918 19:34:25.870435  764615 cli_runner.go:164] Run: docker container inspect missing-upgrade-637091 --format={{.State.Status}}
	W0918 19:34:25.895597  764615 cli_runner.go:211] docker container inspect missing-upgrade-637091 --format={{.State.Status}} returned with exit code 1
	W0918 19:34:25.895654  764615 stop.go:75] unable to get state: unknown state "missing-upgrade-637091": docker container inspect missing-upgrade-637091 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-637091
	I0918 19:34:25.895673  764615 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "missing-upgrade-637091": docker container inspect missing-upgrade-637091 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-637091
	I0918 19:34:25.896142  764615 cli_runner.go:164] Run: docker container inspect missing-upgrade-637091 --format={{.State.Status}}
	W0918 19:34:25.917350  764615 cli_runner.go:211] docker container inspect missing-upgrade-637091 --format={{.State.Status}} returned with exit code 1
	I0918 19:34:25.917433  764615 delete.go:82] Unable to get host status for missing-upgrade-637091, assuming it has already been deleted: state: unknown state "missing-upgrade-637091": docker container inspect missing-upgrade-637091 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-637091
	I0918 19:34:25.917503  764615 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-637091
	W0918 19:34:25.941497  764615 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-637091 returned with exit code 1
	I0918 19:34:25.941527  764615 kic.go:367] could not find the container missing-upgrade-637091 to remove it. will try anyways
	I0918 19:34:25.941581  764615 cli_runner.go:164] Run: docker container inspect missing-upgrade-637091 --format={{.State.Status}}
	W0918 19:34:25.961810  764615 cli_runner.go:211] docker container inspect missing-upgrade-637091 --format={{.State.Status}} returned with exit code 1
	W0918 19:34:25.961864  764615 oci.go:84] error getting container status, will try to delete anyways: unknown state "missing-upgrade-637091": docker container inspect missing-upgrade-637091 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-637091
	I0918 19:34:25.961927  764615 cli_runner.go:164] Run: docker exec --privileged -t missing-upgrade-637091 /bin/bash -c "sudo init 0"
	W0918 19:34:25.993658  764615 cli_runner.go:211] docker exec --privileged -t missing-upgrade-637091 /bin/bash -c "sudo init 0" returned with exit code 1
	I0918 19:34:25.993687  764615 oci.go:647] error shutdown missing-upgrade-637091: docker exec --privileged -t missing-upgrade-637091 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-637091
	I0918 19:34:26.993911  764615 cli_runner.go:164] Run: docker container inspect missing-upgrade-637091 --format={{.State.Status}}
	W0918 19:34:27.016199  764615 cli_runner.go:211] docker container inspect missing-upgrade-637091 --format={{.State.Status}} returned with exit code 1
	I0918 19:34:27.016273  764615 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-637091": docker container inspect missing-upgrade-637091 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-637091
	I0918 19:34:27.016286  764615 oci.go:661] temporary error: container missing-upgrade-637091 status is  but expect it to be exited
	I0918 19:34:27.016316  764615 retry.go:31] will retry after 272.778072ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-637091": docker container inspect missing-upgrade-637091 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-637091
	I0918 19:34:27.289887  764615 cli_runner.go:164] Run: docker container inspect missing-upgrade-637091 --format={{.State.Status}}
	W0918 19:34:27.311441  764615 cli_runner.go:211] docker container inspect missing-upgrade-637091 --format={{.State.Status}} returned with exit code 1
	I0918 19:34:27.311502  764615 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-637091": docker container inspect missing-upgrade-637091 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-637091
	I0918 19:34:27.311515  764615 oci.go:661] temporary error: container missing-upgrade-637091 status is  but expect it to be exited
	I0918 19:34:27.311540  764615 retry.go:31] will retry after 849.511708ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-637091": docker container inspect missing-upgrade-637091 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-637091
	I0918 19:34:28.161262  764615 cli_runner.go:164] Run: docker container inspect missing-upgrade-637091 --format={{.State.Status}}
	W0918 19:34:28.181528  764615 cli_runner.go:211] docker container inspect missing-upgrade-637091 --format={{.State.Status}} returned with exit code 1
	I0918 19:34:28.181588  764615 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-637091": docker container inspect missing-upgrade-637091 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-637091
	I0918 19:34:28.181601  764615 oci.go:661] temporary error: container missing-upgrade-637091 status is  but expect it to be exited
	I0918 19:34:28.181626  764615 retry.go:31] will retry after 1.492517911s: couldn't verify container is exited. %v: unknown state "missing-upgrade-637091": docker container inspect missing-upgrade-637091 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-637091
	I0918 19:34:29.674341  764615 cli_runner.go:164] Run: docker container inspect missing-upgrade-637091 --format={{.State.Status}}
	W0918 19:34:29.695891  764615 cli_runner.go:211] docker container inspect missing-upgrade-637091 --format={{.State.Status}} returned with exit code 1
	I0918 19:34:29.695955  764615 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-637091": docker container inspect missing-upgrade-637091 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-637091
	I0918 19:34:29.695969  764615 oci.go:661] temporary error: container missing-upgrade-637091 status is  but expect it to be exited
	I0918 19:34:29.695994  764615 retry.go:31] will retry after 1.01186053s: couldn't verify container is exited. %v: unknown state "missing-upgrade-637091": docker container inspect missing-upgrade-637091 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-637091
	I0918 19:34:30.708962  764615 cli_runner.go:164] Run: docker container inspect missing-upgrade-637091 --format={{.State.Status}}
	W0918 19:34:30.746144  764615 cli_runner.go:211] docker container inspect missing-upgrade-637091 --format={{.State.Status}} returned with exit code 1
	I0918 19:34:30.746206  764615 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-637091": docker container inspect missing-upgrade-637091 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-637091
	I0918 19:34:30.746215  764615 oci.go:661] temporary error: container missing-upgrade-637091 status is  but expect it to be exited
	I0918 19:34:30.746249  764615 retry.go:31] will retry after 1.893900022s: couldn't verify container is exited. %v: unknown state "missing-upgrade-637091": docker container inspect missing-upgrade-637091 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-637091
	I0918 19:34:32.640388  764615 cli_runner.go:164] Run: docker container inspect missing-upgrade-637091 --format={{.State.Status}}
	W0918 19:34:32.660273  764615 cli_runner.go:211] docker container inspect missing-upgrade-637091 --format={{.State.Status}} returned with exit code 1
	I0918 19:34:32.660332  764615 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-637091": docker container inspect missing-upgrade-637091 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-637091
	I0918 19:34:32.660345  764615 oci.go:661] temporary error: container missing-upgrade-637091 status is  but expect it to be exited
	I0918 19:34:32.660370  764615 retry.go:31] will retry after 1.944755805s: couldn't verify container is exited. %v: unknown state "missing-upgrade-637091": docker container inspect missing-upgrade-637091 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-637091
	I0918 19:34:34.605317  764615 cli_runner.go:164] Run: docker container inspect missing-upgrade-637091 --format={{.State.Status}}
	W0918 19:34:34.623870  764615 cli_runner.go:211] docker container inspect missing-upgrade-637091 --format={{.State.Status}} returned with exit code 1
	I0918 19:34:34.623931  764615 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-637091": docker container inspect missing-upgrade-637091 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-637091
	I0918 19:34:34.623945  764615 oci.go:661] temporary error: container missing-upgrade-637091 status is  but expect it to be exited
	I0918 19:34:34.623984  764615 retry.go:31] will retry after 7.498532908s: couldn't verify container is exited. %v: unknown state "missing-upgrade-637091": docker container inspect missing-upgrade-637091 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-637091
	I0918 19:34:42.122841  764615 cli_runner.go:164] Run: docker container inspect missing-upgrade-637091 --format={{.State.Status}}
	W0918 19:34:42.142775  764615 cli_runner.go:211] docker container inspect missing-upgrade-637091 --format={{.State.Status}} returned with exit code 1
	I0918 19:34:42.142843  764615 oci.go:659] temporary error verifying shutdown: unknown state "missing-upgrade-637091": docker container inspect missing-upgrade-637091 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-637091
	I0918 19:34:42.142867  764615 oci.go:661] temporary error: container missing-upgrade-637091 status is  but expect it to be exited
	I0918 19:34:42.142904  764615 oci.go:88] couldn't shut down missing-upgrade-637091 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "missing-upgrade-637091": docker container inspect missing-upgrade-637091 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-637091
	 
	I0918 19:34:42.142973  764615 cli_runner.go:164] Run: docker rm -f -v missing-upgrade-637091
	I0918 19:34:42.164180  764615 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-637091
	W0918 19:34:42.183228  764615 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-637091 returned with exit code 1
	I0918 19:34:42.183343  764615 cli_runner.go:164] Run: docker network inspect missing-upgrade-637091 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0918 19:34:42.204385  764615 cli_runner.go:164] Run: docker network rm missing-upgrade-637091
	I0918 19:34:42.315833  764615 fix.go:114] Sleeping 1 second for extra luck!
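
The DEMOLISHING phase above keeps re-running docker container inspect with growing delays (272ms, 849ms, 1.49s, ... 7.5s) before giving up and falling through to docker rm -f. A minimal sketch of that retry shape, assuming a simple doubling backoff with jitter (minikube's retry.go has its own policy):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff calls fn until it succeeds or attempts run out,
	// sleeping a little longer (with jitter) after each failure.
	func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
		delay := base
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			sleep := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %s: %v\n", sleep, err)
			time.Sleep(sleep)
			delay *= 2
		}
		return fmt.Errorf("couldn't verify container is exited: %w", err)
	}

	func main() {
		err := retryWithBackoff(5, 250*time.Millisecond, func() error {
			return errors.New(`unknown state "missing-upgrade-637091"`)
		})
		fmt.Println(err) // the real flow falls through to a forced removal
	}
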
	I0918 19:34:43.316056  764615 start.go:125] createHost starting for "" (driver="docker")
	I0918 19:34:43.322581  764615 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0918 19:34:43.322747  764615 start.go:159] libmachine.API.Create for "missing-upgrade-637091" (driver="docker")
	I0918 19:34:43.322775  764615 client.go:168] LocalClient.Create starting
	I0918 19:34:43.322871  764615 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17263-642665/.minikube/certs/ca.pem
	I0918 19:34:43.322910  764615 main.go:141] libmachine: Decoding PEM data...
	I0918 19:34:43.322931  764615 main.go:141] libmachine: Parsing certificate...
	I0918 19:34:43.322995  764615 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17263-642665/.minikube/certs/cert.pem
	I0918 19:34:43.323017  764615 main.go:141] libmachine: Decoding PEM data...
	I0918 19:34:43.323038  764615 main.go:141] libmachine: Parsing certificate...
	I0918 19:34:43.323287  764615 cli_runner.go:164] Run: docker network inspect missing-upgrade-637091 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0918 19:34:43.339625  764615 cli_runner.go:211] docker network inspect missing-upgrade-637091 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0918 19:34:43.339710  764615 network_create.go:281] running [docker network inspect missing-upgrade-637091] to gather additional debugging logs...
	I0918 19:34:43.339729  764615 cli_runner.go:164] Run: docker network inspect missing-upgrade-637091
	W0918 19:34:43.357879  764615 cli_runner.go:211] docker network inspect missing-upgrade-637091 returned with exit code 1
	I0918 19:34:43.357911  764615 network_create.go:284] error running [docker network inspect missing-upgrade-637091]: docker network inspect missing-upgrade-637091: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network missing-upgrade-637091 not found
	I0918 19:34:43.357927  764615 network_create.go:286] output of [docker network inspect missing-upgrade-637091]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network missing-upgrade-637091 not found
	
	** /stderr **
	I0918 19:34:43.357992  764615 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0918 19:34:43.378450  764615 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-0d7b340fbd2d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:fc:f4:37:66} reservation:<nil>}
	I0918 19:34:43.378780  764615 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-fb63e8abd7f0 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:c3:90:95:ad} reservation:<nil>}
	I0918 19:34:43.379199  764615 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4000cdc250}
	I0918 19:34:43.379221  764615 network_create.go:123] attempt to create docker network missing-upgrade-637091 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0918 19:34:43.379281  764615 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=missing-upgrade-637091 missing-upgrade-637091
	I0918 19:34:43.450589  764615 network_create.go:107] docker network missing-upgrade-637091 192.168.67.0/24 created
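
The network_create step walks candidate private /24 subnets (192.168.49.0/24, then 192.168.58.0/24, ...) and takes the first one not already bound to a host interface; in this run 49 and 58 were taken, so 67 was used. A sketch of that scan, assuming a fixed step of 9 between candidates as the log suggests:

	package main

	import (
		"fmt"
		"net"
	)

	// firstFreeSubnet returns the first 192.168.x.0/24 candidate whose
	// gateway address is not covered by any local interface.
	func firstFreeSubnet(start, step, tries int) (string, error) {
		addrs, err := net.InterfaceAddrs()
		if err != nil {
			return "", err
		}
		taken := func(gw net.IP) bool {
			for _, a := range addrs {
				if ipn, ok := a.(*net.IPNet); ok && ipn.Contains(gw) {
					return true
				}
			}
			return false
		}
		for i := 0; i < tries; i++ {
			third := start + i*step
			gw := net.IPv4(192, 168, byte(third), 1)
			if !taken(gw) {
				return fmt.Sprintf("192.168.%d.0/24", third), nil
			}
			fmt.Printf("skipping subnet 192.168.%d.0/24 that is taken\n", third)
		}
		return "", fmt.Errorf("no free subnet found")
	}

	func main() {
		subnet, err := firstFreeSubnet(49, 9, 20)
		fmt.Println(subnet, err)
	}
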
	I0918 19:34:43.450621  764615 kic.go:117] calculated static IP "192.168.67.2" for the "missing-upgrade-637091" container
	I0918 19:34:43.450702  764615 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0918 19:34:43.467662  764615 cli_runner.go:164] Run: docker volume create missing-upgrade-637091 --label name.minikube.sigs.k8s.io=missing-upgrade-637091 --label created_by.minikube.sigs.k8s.io=true
	I0918 19:34:43.484242  764615 oci.go:103] Successfully created a docker volume missing-upgrade-637091
	I0918 19:34:43.484355  764615 cli_runner.go:164] Run: docker run --rm --name missing-upgrade-637091-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-637091 --entrypoint /usr/bin/test -v missing-upgrade-637091:/var gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e -d /var/lib
	I0918 19:34:44.045210  764615 oci.go:107] Successfully prepared a docker volume missing-upgrade-637091
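
The "preload-sidecar" run above is a disposable container whose entrypoint is just /usr/bin/test -d /var/lib: mounting the named volume at /var makes Docker populate it from the image, and the test doubles as a sanity check before the real node container starts. A sketch of issuing that run from Go (names and paths copied from the log; the wrapper itself is illustrative):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// prepareVolume starts a throwaway container that mounts the named
	// volume at /var, so the image's /var contents are copied into it,
	// and verifies /var/lib exists via the test(1) entrypoint.
	func prepareVolume(volume, image string) error {
		cmd := exec.Command("docker", "run", "--rm",
			"--name", volume+"-preload-sidecar",
			"--entrypoint", "/usr/bin/test",
			"-v", volume+":/var",
			image, "-d", "/var/lib")
		out, err := cmd.CombinedOutput()
		if err != nil {
			return fmt.Errorf("prepare volume: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		err := prepareVolume("missing-upgrade-637091",
			"gcr.io/k8s-minikube/kicbase:v0.0.17")
		fmt.Println(err)
	}
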
	I0918 19:34:44.045242  764615 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	W0918 19:34:44.045394  764615 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0918 19:34:44.045509  764615 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0918 19:34:44.114991  764615 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname missing-upgrade-637091 --name missing-upgrade-637091 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-637091 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=missing-upgrade-637091 --network missing-upgrade-637091 --ip 192.168.67.2 --volume missing-upgrade-637091:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e
	I0918 19:34:44.461883  764615 cli_runner.go:164] Run: docker container inspect missing-upgrade-637091 --format={{.State.Running}}
	I0918 19:34:44.484178  764615 cli_runner.go:164] Run: docker container inspect missing-upgrade-637091 --format={{.State.Status}}
	I0918 19:34:44.513110  764615 cli_runner.go:164] Run: docker exec missing-upgrade-637091 stat /var/lib/dpkg/alternatives/iptables
	I0918 19:34:44.593777  764615 oci.go:144] the created container "missing-upgrade-637091" has a running status.
	I0918 19:34:44.593823  764615 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17263-642665/.minikube/machines/missing-upgrade-637091/id_rsa...
	I0918 19:34:44.810969  764615 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17263-642665/.minikube/machines/missing-upgrade-637091/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0918 19:34:44.841299  764615 cli_runner.go:164] Run: docker container inspect missing-upgrade-637091 --format={{.State.Status}}
	I0918 19:34:44.878698  764615 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0918 19:34:44.878722  764615 kic_runner.go:114] Args: [docker exec --privileged missing-upgrade-637091 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0918 19:34:44.974239  764615 cli_runner.go:164] Run: docker container inspect missing-upgrade-637091 --format={{.State.Status}}
	I0918 19:34:44.996877  764615 machine.go:88] provisioning docker machine ...
	I0918 19:34:44.996907  764615 ubuntu.go:169] provisioning hostname "missing-upgrade-637091"
	I0918 19:34:44.996980  764615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-637091
	I0918 19:34:45.068187  764615 main.go:141] libmachine: Using SSH client type: native
	I0918 19:34:45.068956  764615 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 33594 <nil> <nil>}
	I0918 19:34:45.068978  764615 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-637091 && echo "missing-upgrade-637091" | sudo tee /etc/hostname
	I0918 19:34:45.073296  764615 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40030->127.0.0.1:33594: read: connection reset by peer
	I0918 19:34:48.229469  764615 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-637091
	
	I0918 19:34:48.229552  764615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-637091
	I0918 19:34:48.248264  764615 main.go:141] libmachine: Using SSH client type: native
	I0918 19:34:48.248679  764615 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 33594 <nil> <nil>}
	I0918 19:34:48.248702  764615 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-637091' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-637091/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-637091' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0918 19:34:48.388707  764615 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 19:34:48.388733  764615 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17263-642665/.minikube CaCertPath:/home/jenkins/minikube-integration/17263-642665/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17263-642665/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17263-642665/.minikube}
	I0918 19:34:48.388750  764615 ubuntu.go:177] setting up certificates
	I0918 19:34:48.388760  764615 provision.go:83] configureAuth start
	I0918 19:34:48.388821  764615 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-637091
	I0918 19:34:48.406837  764615 provision.go:138] copyHostCerts
	I0918 19:34:48.406897  764615 exec_runner.go:144] found /home/jenkins/minikube-integration/17263-642665/.minikube/ca.pem, removing ...
	I0918 19:34:48.406905  764615 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17263-642665/.minikube/ca.pem
	I0918 19:34:48.406979  764615 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17263-642665/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17263-642665/.minikube/ca.pem (1082 bytes)
	I0918 19:34:48.407087  764615 exec_runner.go:144] found /home/jenkins/minikube-integration/17263-642665/.minikube/cert.pem, removing ...
	I0918 19:34:48.407094  764615 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17263-642665/.minikube/cert.pem
	I0918 19:34:48.407122  764615 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17263-642665/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17263-642665/.minikube/cert.pem (1123 bytes)
	I0918 19:34:48.407175  764615 exec_runner.go:144] found /home/jenkins/minikube-integration/17263-642665/.minikube/key.pem, removing ...
	I0918 19:34:48.407179  764615 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17263-642665/.minikube/key.pem
	I0918 19:34:48.407201  764615 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17263-642665/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17263-642665/.minikube/key.pem (1675 bytes)
	I0918 19:34:48.407242  764615 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17263-642665/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17263-642665/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17263-642665/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-637091 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-637091]
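
The provision step generates a server certificate whose SANs cover every name the node may be reached by (192.168.67.2, 127.0.0.1, localhost, minikube, the profile name). A compact crypto/x509 sketch of building that SAN list (self-signed here for brevity; minikube signs with its CA key):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.missing-upgrade-637091"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// The SAN list from the provision.go log line above.
			DNSNames:    []string{"localhost", "minikube", "missing-upgrade-637091"},
			IPAddresses: []net.IP{net.ParseIP("192.168.67.2"), net.ParseIP("127.0.0.1")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
	}
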
	I0918 19:34:49.686109  764615 provision.go:172] copyRemoteCerts
	I0918 19:34:49.686185  764615 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0918 19:34:49.686230  764615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-637091
	I0918 19:34:49.709349  764615 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33594 SSHKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/missing-upgrade-637091/id_rsa Username:docker}
	I0918 19:34:49.809086  764615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17263-642665/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0918 19:34:49.832573  764615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17263-642665/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0918 19:34:49.856090  764615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17263-642665/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0918 19:34:49.878344  764615 provision.go:86] duration metric: configureAuth took 1.489569962s
	I0918 19:34:49.878367  764615 ubuntu.go:193] setting minikube options for container-runtime
	I0918 19:34:49.878557  764615 config.go:182] Loaded profile config "missing-upgrade-637091": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0918 19:34:49.878660  764615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-637091
	I0918 19:34:49.896353  764615 main.go:141] libmachine: Using SSH client type: native
	I0918 19:34:49.896778  764615 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 33594 <nil> <nil>}
	I0918 19:34:49.896800  764615 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0918 19:34:50.317413  764615 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0918 19:34:50.317435  764615 machine.go:91] provisioned docker machine in 5.320537829s
	I0918 19:34:50.317445  764615 client.go:171] LocalClient.Create took 6.994664164s
	I0918 19:34:50.317500  764615 start.go:167] duration metric: libmachine.API.Create for "missing-upgrade-637091" took 6.994754848s
	I0918 19:34:50.317513  764615 start.go:300] post-start starting for "missing-upgrade-637091" (driver="docker")
	I0918 19:34:50.317523  764615 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0918 19:34:50.317616  764615 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0918 19:34:50.317677  764615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-637091
	I0918 19:34:50.336329  764615 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33594 SSHKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/missing-upgrade-637091/id_rsa Username:docker}
	I0918 19:34:50.437109  764615 ssh_runner.go:195] Run: cat /etc/os-release
	I0918 19:34:50.441492  764615 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0918 19:34:50.441517  764615 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0918 19:34:50.441529  764615 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0918 19:34:50.441536  764615 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I0918 19:34:50.441547  764615 filesync.go:126] Scanning /home/jenkins/minikube-integration/17263-642665/.minikube/addons for local assets ...
	I0918 19:34:50.441611  764615 filesync.go:126] Scanning /home/jenkins/minikube-integration/17263-642665/.minikube/files for local assets ...
	I0918 19:34:50.441701  764615 filesync.go:149] local asset: /home/jenkins/minikube-integration/17263-642665/.minikube/files/etc/ssl/certs/6480032.pem -> 6480032.pem in /etc/ssl/certs
	I0918 19:34:50.441810  764615 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0918 19:34:50.450596  764615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17263-642665/.minikube/files/etc/ssl/certs/6480032.pem --> /etc/ssl/certs/6480032.pem (1708 bytes)
	I0918 19:34:50.473912  764615 start.go:303] post-start completed in 156.38239ms
	I0918 19:34:50.474281  764615 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-637091
	I0918 19:34:50.494295  764615 profile.go:148] Saving config to /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/missing-upgrade-637091/config.json ...
	I0918 19:34:50.494584  764615 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0918 19:34:50.494629  764615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-637091
	I0918 19:34:50.512919  764615 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33594 SSHKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/missing-upgrade-637091/id_rsa Username:docker}
	I0918 19:34:50.611307  764615 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0918 19:34:50.616771  764615 start.go:128] duration metric: createHost completed in 7.300645667s
	I0918 19:34:50.616884  764615 cli_runner.go:164] Run: docker container inspect missing-upgrade-637091 --format={{.State.Status}}
	W0918 19:34:50.640146  764615 fix.go:128] unexpected machine state, will restart: <nil>
	I0918 19:34:50.640178  764615 machine.go:88] provisioning docker machine ...
	I0918 19:34:50.640195  764615 ubuntu.go:169] provisioning hostname "missing-upgrade-637091"
	I0918 19:34:50.640263  764615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-637091
	I0918 19:34:50.662840  764615 main.go:141] libmachine: Using SSH client type: native
	I0918 19:34:50.663280  764615 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 33594 <nil> <nil>}
	I0918 19:34:50.663298  764615 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-637091 && echo "missing-upgrade-637091" | sudo tee /etc/hostname
	I0918 19:34:50.839106  764615 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-637091
	
	I0918 19:34:50.839197  764615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-637091
	I0918 19:34:50.867454  764615 main.go:141] libmachine: Using SSH client type: native
	I0918 19:34:50.868044  764615 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 33594 <nil> <nil>}
	I0918 19:34:50.868072  764615 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-637091' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-637091/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-637091' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0918 19:34:51.019429  764615 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 19:34:51.019460  764615 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17263-642665/.minikube CaCertPath:/home/jenkins/minikube-integration/17263-642665/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17263-642665/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17263-642665/.minikube}
	I0918 19:34:51.019477  764615 ubuntu.go:177] setting up certificates
	I0918 19:34:51.019487  764615 provision.go:83] configureAuth start
	I0918 19:34:51.019595  764615 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-637091
	I0918 19:34:51.050887  764615 provision.go:138] copyHostCerts
	I0918 19:34:51.050985  764615 exec_runner.go:144] found /home/jenkins/minikube-integration/17263-642665/.minikube/ca.pem, removing ...
	I0918 19:34:51.051000  764615 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17263-642665/.minikube/ca.pem
	I0918 19:34:51.051103  764615 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17263-642665/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17263-642665/.minikube/ca.pem (1082 bytes)
	I0918 19:34:51.051255  764615 exec_runner.go:144] found /home/jenkins/minikube-integration/17263-642665/.minikube/cert.pem, removing ...
	I0918 19:34:51.051264  764615 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17263-642665/.minikube/cert.pem
	I0918 19:34:51.051317  764615 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17263-642665/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17263-642665/.minikube/cert.pem (1123 bytes)
	I0918 19:34:51.051406  764615 exec_runner.go:144] found /home/jenkins/minikube-integration/17263-642665/.minikube/key.pem, removing ...
	I0918 19:34:51.051416  764615 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17263-642665/.minikube/key.pem
	I0918 19:34:51.051457  764615 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17263-642665/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17263-642665/.minikube/key.pem (1675 bytes)
	I0918 19:34:51.065124  764615 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17263-642665/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17263-642665/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17263-642665/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-637091 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-637091]
	I0918 19:34:51.419991  764615 provision.go:172] copyRemoteCerts
	I0918 19:34:51.420075  764615 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0918 19:34:51.420137  764615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-637091
	I0918 19:34:51.451216  764615 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33594 SSHKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/missing-upgrade-637091/id_rsa Username:docker}
	I0918 19:34:51.557128  764615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17263-642665/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0918 19:34:51.580871  764615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17263-642665/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0918 19:34:51.606656  764615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17263-642665/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0918 19:34:51.641319  764615 provision.go:86] duration metric: configureAuth took 621.816993ms
	I0918 19:34:51.641352  764615 ubuntu.go:193] setting minikube options for container-runtime
	I0918 19:34:51.641580  764615 config.go:182] Loaded profile config "missing-upgrade-637091": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0918 19:34:51.641714  764615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-637091
	I0918 19:34:51.666475  764615 main.go:141] libmachine: Using SSH client type: native
	I0918 19:34:51.666957  764615 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 33594 <nil> <nil>}
	I0918 19:34:51.666981  764615 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0918 19:34:51.973227  764615 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0918 19:34:51.973309  764615 machine.go:91] provisioned docker machine in 1.333123311s
	I0918 19:34:51.973332  764615 start.go:300] post-start starting for "missing-upgrade-637091" (driver="docker")
	I0918 19:34:51.973368  764615 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0918 19:34:51.973454  764615 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0918 19:34:51.973523  764615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-637091
	I0918 19:34:51.993323  764615 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33594 SSHKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/missing-upgrade-637091/id_rsa Username:docker}
	I0918 19:34:52.097510  764615 ssh_runner.go:195] Run: cat /etc/os-release
	I0918 19:34:52.102001  764615 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0918 19:34:52.102025  764615 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0918 19:34:52.102036  764615 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0918 19:34:52.102044  764615 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I0918 19:34:52.102055  764615 filesync.go:126] Scanning /home/jenkins/minikube-integration/17263-642665/.minikube/addons for local assets ...
	I0918 19:34:52.102112  764615 filesync.go:126] Scanning /home/jenkins/minikube-integration/17263-642665/.minikube/files for local assets ...
	I0918 19:34:52.102192  764615 filesync.go:149] local asset: /home/jenkins/minikube-integration/17263-642665/.minikube/files/etc/ssl/certs/6480032.pem -> 6480032.pem in /etc/ssl/certs
	I0918 19:34:52.102296  764615 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0918 19:34:52.111907  764615 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17263-642665/.minikube/files/etc/ssl/certs/6480032.pem --> /etc/ssl/certs/6480032.pem (1708 bytes)
	I0918 19:34:52.135725  764615 start.go:303] post-start completed in 162.364188ms
	I0918 19:34:52.135834  764615 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0918 19:34:52.135884  764615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-637091
	I0918 19:34:52.154280  764615 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33594 SSHKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/missing-upgrade-637091/id_rsa Username:docker}
	I0918 19:34:52.250023  764615 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0918 19:34:52.255874  764615 fix.go:56] fixHost completed within 26.470599318s
	I0918 19:34:52.255896  764615 start.go:83] releasing machines lock for "missing-upgrade-637091", held for 26.470648131s
	I0918 19:34:52.255965  764615 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-637091
	I0918 19:34:52.277101  764615 ssh_runner.go:195] Run: cat /version.json
	I0918 19:34:52.277155  764615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-637091
	I0918 19:34:52.277161  764615 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0918 19:34:52.277219  764615 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-637091
	I0918 19:34:52.296201  764615 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33594 SSHKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/missing-upgrade-637091/id_rsa Username:docker}
	I0918 19:34:52.309325  764615 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33594 SSHKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/missing-upgrade-637091/id_rsa Username:docker}
	W0918 19:34:52.547461  764615 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0918 19:34:52.547604  764615 ssh_runner.go:195] Run: systemctl --version
	I0918 19:34:52.553110  764615 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0918 19:34:52.662850  764615 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0918 19:34:52.668748  764615 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0918 19:34:52.694183  764615 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0918 19:34:52.694271  764615 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0918 19:34:52.733160  764615 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
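
The two Runs above disable the default CNI configs by renaming them: first the loopback conf, then any bridge/podman conf, each given a .mk_disabled suffix so CRI-O stops loading them. The same effect in Go with a glob and rename (the pattern list mirrors the find expression above):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	// disableCNIConfigs renames matching files under /etc/cni/net.d by
	// appending .mk_disabled, skipping files already disabled.
	func disableCNIConfigs(dir string, patterns []string) ([]string, error) {
		var disabled []string
		for _, p := range patterns {
			matches, err := filepath.Glob(filepath.Join(dir, p))
			if err != nil {
				return nil, err
			}
			for _, m := range matches {
				if strings.HasSuffix(m, ".mk_disabled") {
					continue
				}
				if err := os.Rename(m, m+".mk_disabled"); err != nil {
					return nil, err
				}
				disabled = append(disabled, m)
			}
		}
		return disabled, nil
	}

	func main() {
		got, err := disableCNIConfigs("/etc/cni/net.d",
			[]string{"*loopback.conf*", "*bridge*", "*podman*"})
		fmt.Println(got, err)
	}
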
	I0918 19:34:52.733181  764615 start.go:469] detecting cgroup driver to use...
	I0918 19:34:52.733214  764615 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0918 19:34:52.733266  764615 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0918 19:34:52.759247  764615 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0918 19:34:52.771304  764615 docker.go:196] disabling cri-docker service (if available) ...
	I0918 19:34:52.771418  764615 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0918 19:34:52.783030  764615 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0918 19:34:52.795385  764615 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0918 19:34:52.808939  764615 docker.go:206] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0918 19:34:52.809027  764615 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0918 19:34:52.922633  764615 docker.go:212] disabling docker service ...
	I0918 19:34:52.922742  764615 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0918 19:34:52.935653  764615 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0918 19:34:52.948341  764615 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0918 19:34:53.057779  764615 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0918 19:34:53.170576  764615 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0918 19:34:53.182629  764615 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 19:34:53.199849  764615 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0918 19:34:53.199918  764615 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 19:34:53.213205  764615 out.go:177] 
	W0918 19:34:53.215446  764615 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0918 19:34:53.215467  764615 out.go:239] * 
	W0918 19:34:53.216524  764615 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 19:34:53.219240  764615 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:344: failed missing container upgrade from v1.17.0. args: out/minikube-linux-arm64 start -p missing-upgrade-637091 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio : exit status 90
version_upgrade_test.go:346: *** TestMissingContainerUpgrade FAILED at 2023-09-18 19:34:53.263214388 +0000 UTC m=+2407.197243202
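
The fatal step is the sed -i on /etc/crio/crio.conf.d/02-crio.conf: the old kicbase v0.0.17 image used by this upgrade-from-v1.17.0 test does not ship that drop-in file, so sed exits 2 and minikube aborts with RUNTIME_ENABLE. A defensive variant would probe for the drop-in and fall back to the main /etc/crio/crio.conf; the following is a sketch of that idea, not the fix that minikube actually uses:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// setPauseImage rewrites pause_image in whichever CRI-O config file
	// actually exists, instead of assuming the 02-crio.conf drop-in.
	func setPauseImage(image string) error {
		candidates := []string{
			"/etc/crio/crio.conf.d/02-crio.conf",
			"/etc/crio/crio.conf", // older kicbase images only ship this
		}
		for _, conf := range candidates {
			if _, err := os.Stat(conf); err != nil {
				continue
			}
			expr := fmt.Sprintf(`s|^.*pause_image = .*$|pause_image = "%s"|`, image)
			out, err := exec.Command("sudo", "sed", "-i", expr, conf).CombinedOutput()
			if err != nil {
				return fmt.Errorf("update pause_image in %s: %v: %s", conf, err, out)
			}
			return nil
		}
		return fmt.Errorf("no CRI-O config found to update")
	}

	func main() {
		fmt.Println(setPauseImage("registry.k8s.io/pause:3.2"))
	}
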
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-637091
helpers_test.go:235: (dbg) docker inspect missing-upgrade-637091:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6de9252e2d7f1bd29d418f4db494e5442b2bc3880a45403a9886b73d4f16a0d6",
	        "Created": "2023-09-18T19:34:44.13239416Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 765822,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-09-18T19:34:44.452265828Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9b79b8263a5873a7b57b8bb7698df1f71e90108b3174dea92dc6c576c0a9dbf9",
	        "ResolvConfPath": "/var/lib/docker/containers/6de9252e2d7f1bd29d418f4db494e5442b2bc3880a45403a9886b73d4f16a0d6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6de9252e2d7f1bd29d418f4db494e5442b2bc3880a45403a9886b73d4f16a0d6/hostname",
	        "HostsPath": "/var/lib/docker/containers/6de9252e2d7f1bd29d418f4db494e5442b2bc3880a45403a9886b73d4f16a0d6/hosts",
	        "LogPath": "/var/lib/docker/containers/6de9252e2d7f1bd29d418f4db494e5442b2bc3880a45403a9886b73d4f16a0d6/6de9252e2d7f1bd29d418f4db494e5442b2bc3880a45403a9886b73d4f16a0d6-json.log",
	        "Name": "/missing-upgrade-637091",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "missing-upgrade-637091:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "missing-upgrade-637091",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2b3ac152d310fb1ce86a7f3268b90897ad9693854490262d3ce9dfcde273855d-init/diff:/var/lib/docker/overlay2/f38cb2bd8f69b1eb64f55371998f51b07a602dc5334444b95a7bdd04edf816e8/diff:/var/lib/docker/overlay2/0aaec6b4aa94a6f721d4a9fb1f65173b6f6aee43c4e800726c1c70baf0325b21/diff:/var/lib/docker/overlay2/2142398ec7df920c5cc468a82ee6993b78d7335448da1ae2ef355cb39116d741/diff:/var/lib/docker/overlay2/30a3be5bc846eeb5a6692007014edec3fa9677d08a939afd790d818c18efd1e8/diff:/var/lib/docker/overlay2/74f159bfa9382574c197512d4f82f34d80f8e80cffc78a0db5dbdb11060dd6d7/diff:/var/lib/docker/overlay2/b82d499402d99f04bfa346de14f8b9e579efe2fb6150414c0eccfb58189bcc34/diff:/var/lib/docker/overlay2/096daf5936db020d69c4e9a2c06005b1ee70e2cd3ddf732a91a5f90c7fdff28e/diff:/var/lib/docker/overlay2/cba464c515e70541d9b9a4ee70f86a65c18f36b9810f264bce44790bfc20ac27/diff:/var/lib/docker/overlay2/586a45ebe322a9b8d40d94fb09bf5d4f744c99b24e501dbc1d2f5a1d5e1938a3/diff:/var/lib/docker/overlay2/080b00
22360c98c966b0325b816b071d20af80d94c808b525ba678aa67dfd158/diff:/var/lib/docker/overlay2/ac4c3e51870fa72f38a785e19cf695de88c8e82a54e2cab5aee911cbeae0e86d/diff:/var/lib/docker/overlay2/a7ab60c06bdca3a6b01e18243341ec875440948952bc85696528133215f03c17/diff:/var/lib/docker/overlay2/360a2898bcae4a29341d06e3704037123a63b491046b0c8fed5d2b1a91a6cf58/diff:/var/lib/docker/overlay2/8ab5564834c8bd80137eef2ad1604e42b5aa0314b3c980e32f455a9f07dd2906/diff:/var/lib/docker/overlay2/b9decda59f4d7015ddc8d536f5ccc98597a7a2ef5ed47b5337c5f57439042629/diff:/var/lib/docker/overlay2/9b6e283249c82b7b4299face9586f7c3e3dc34425be42f574e5220656d292e42/diff:/var/lib/docker/overlay2/4d8c84812f2b05a66689ce904f1bc13eaad25dea3bf8a02249459d972b5b0ddc/diff:/var/lib/docker/overlay2/6240cc9c963fdd9fba376a2884326e684a11068ee94dcca546948ba2d05029c3/diff:/var/lib/docker/overlay2/8881bcf0ef342f5de4e7b3683b98b4c0acf1464d83aed8ba4dd5f4bce6f6dbc3/diff:/var/lib/docker/overlay2/6055fb6358947e6d134935e22458fbb850c4c8b4c79fd120aa53b36930c87dce/diff:/var/lib/d
ocker/overlay2/7f5f53810b2c30181eafdcd4eb85e3b7a15f432d61f7b0cdeddf79af022d09a1/diff:/var/lib/docker/overlay2/c4adde5a09e751f8203007dcca711e44463b3b5356f0615af97b91c1de661dee/diff:/var/lib/docker/overlay2/56328cb4a756e8f7d5cf2ef661400a956ab9281ce19ad385ce1821d349d2f034/diff:/var/lib/docker/overlay2/25e035ace628fdd932de27fc45d1206c9a5ae178951e707978d1c17a6a54ec50/diff:/var/lib/docker/overlay2/a64f32516179416bcc0aaa30b4d0b1a221ad3ea4f2a79bced9473d1bf1df456a/diff:/var/lib/docker/overlay2/cc4ac13fa37076d01752fc96a6c69f79f7ce2ecd7fb303a98d33c205ced2322a/diff:/var/lib/docker/overlay2/57dbf8fe108f791e720c50f57976766b9a40706752e6d566e479e74c38a982fd/diff:/var/lib/docker/overlay2/c272e08f80e4ec66f8034abf1e45e9f96ff4f17b5f50102f70530b8b24c74edf/diff:/var/lib/docker/overlay2/9e1cf4e7614d8df16d085c7a3aeabdf4bccb0d12282a1bdb1ffc0c42608c673e/diff:/var/lib/docker/overlay2/3de07e1b0fa586e7434d860e73bcb923d7416f46ec3a36c2d68e83fd5231723b/diff:/var/lib/docker/overlay2/fff86ead8d3a0aeaf435b7793db5f4bf24921581e1fb42b3033c32cad47
1b4c4/diff:/var/lib/docker/overlay2/832675e8cdbb63c2159734ce1abaf600c7124223fa650b82909307cd1221c647/diff:/var/lib/docker/overlay2/9b6294afcb929bed36c1d7aa103574b01a64f3ecba7bf77c963dc47ec01f756f/diff:/var/lib/docker/overlay2/37ac1555a896aede9fa776861811e984b669716c336d09f946ddb26e32d800c1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2b3ac152d310fb1ce86a7f3268b90897ad9693854490262d3ce9dfcde273855d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2b3ac152d310fb1ce86a7f3268b90897ad9693854490262d3ce9dfcde273855d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2b3ac152d310fb1ce86a7f3268b90897ad9693854490262d3ce9dfcde273855d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "missing-upgrade-637091",
	                "Source": "/var/lib/docker/volumes/missing-upgrade-637091/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "missing-upgrade-637091",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "missing-upgrade-637091",
	                "name.minikube.sigs.k8s.io": "missing-upgrade-637091",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "06d2bb74d41840fbe53aaf15dcdf675361a2339e282d694d2828b909f358da26",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33594"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33593"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33590"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33592"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33591"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/06d2bb74d418",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "missing-upgrade-637091": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "6de9252e2d7f",
	                        "missing-upgrade-637091"
	                    ],
	                    "NetworkID": "63a5d6f98a5671180113764100a038bb0013ca6f059d2b73940065dc598426ba",
	                    "EndpointID": "5d7d223c98884ef0abf29eaafdcf1d6ae827c0e40bde23526389bae5fbbeb506",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
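The inspect dump above is the raw JSON; individual fields can be pulled with a Go template against the same docker CLI. A minimal sketch, reusing this run's container name (the template path mirrors the one the test harness itself uses later to locate the SSH port):

	docker container inspect missing-upgrade-637091 \
	  --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'
	# prints 33594, the host port mapped onto the container's SSH port 22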
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p missing-upgrade-637091 -n missing-upgrade-637091
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p missing-upgrade-637091 -n missing-upgrade-637091: exit status 6 (328.594444ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0918 19:34:53.598624  766880 status.go:415] kubeconfig endpoint: got: 192.168.59.164:8443, want: 192.168.67.2:8443

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "missing-upgrade-637091" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "missing-upgrade-637091" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-637091
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-637091: (2.025752003s)
--- FAIL: TestMissingContainerUpgrade (172.61s)
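Exit status 90 is minikube's RUNTIME_ENABLE failure class, and the kicbase image in play (v0.0.17, per the inspect output above) is the same one that trips TestStoppedBinaryUpgrade below, so a missing cri-o drop-in config is the likely culprit here as well; that is an inference, since the stderr for this run is truncated above. Two spot-checks that could have been run before the profile was cleaned up, reusing names from this run:

	# does the old kicbase container carry the config file newer minikube edits?
	docker exec missing-upgrade-637091 ls /etc/crio/crio.conf.d/
	# the stale-kubeconfig warning from the status check has a documented fix:
	out/minikube-linux-arm64 -p missing-upgrade-637091 update-context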

x
+
TestStoppedBinaryUpgrade/Upgrade (78.35s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.17.0.2188607658.exe start -p stopped-upgrade-311194 --memory=2200 --vm-driver=docker  --container-runtime=crio
E0918 19:35:14.596114  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/functional-382151/client.crt: no such file or directory
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.17.0.2188607658.exe start -p stopped-upgrade-311194 --memory=2200 --vm-driver=docker  --container-runtime=crio: (58.964149727s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.17.0.2188607658.exe -p stopped-upgrade-311194 stop
E0918 19:36:03.533124  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/client.crt: no such file or directory
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.17.0.2188607658.exe -p stopped-upgrade-311194 stop: (12.334559656s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-311194 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:211: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p stopped-upgrade-311194 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (7.052398925s)

-- stdout --
	* [stopped-upgrade-311194] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17263
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17263-642665/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17263-642665/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.2
	* Using the docker driver based on existing profile
	* Starting control plane node stopped-upgrade-311194 in cluster stopped-upgrade-311194
	* Pulling base image ...
	* Restarting existing docker container for "stopped-upgrade-311194" ...
	
	

-- /stdout --
** stderr ** 
	I0918 19:36:08.223279  771154 out.go:296] Setting OutFile to fd 1 ...
	I0918 19:36:08.223411  771154 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 19:36:08.223421  771154 out.go:309] Setting ErrFile to fd 2...
	I0918 19:36:08.223427  771154 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 19:36:08.223697  771154 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17263-642665/.minikube/bin
	I0918 19:36:08.224114  771154 out.go:303] Setting JSON to false
	I0918 19:36:08.225132  771154 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":11914,"bootTime":1695053855,"procs":256,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0918 19:36:08.225209  771154 start.go:138] virtualization:  
	I0918 19:36:08.227937  771154 out.go:177] * [stopped-upgrade-311194] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0918 19:36:08.230754  771154 out.go:177]   - MINIKUBE_LOCATION=17263
	I0918 19:36:08.230955  771154 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17263-642665/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4
	I0918 19:36:08.231001  771154 notify.go:220] Checking for updates...
	I0918 19:36:08.235813  771154 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 19:36:08.237950  771154 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17263-642665/kubeconfig
	I0918 19:36:08.239935  771154 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17263-642665/.minikube
	I0918 19:36:08.241666  771154 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0918 19:36:08.243511  771154 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 19:36:08.245858  771154 config.go:182] Loaded profile config "stopped-upgrade-311194": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0918 19:36:08.248426  771154 out.go:177] * Kubernetes 1.28.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.2
	I0918 19:36:08.250438  771154 driver.go:373] Setting default libvirt URI to qemu:///system
	I0918 19:36:08.285074  771154 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0918 19:36:08.285193  771154 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0918 19:36:08.369272  771154 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17263-642665/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4.checksum
	I0918 19:36:08.388614  771154 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:45 SystemTime:2023-09-18 19:36:08.37908625 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archit
ecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> Se
rverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0918 19:36:08.388718  771154 docker.go:294] overlay module found
	I0918 19:36:08.392849  771154 out.go:177] * Using the docker driver based on existing profile
	I0918 19:36:08.395051  771154 start.go:298] selected driver: docker
	I0918 19:36:08.395069  771154 start.go:902] validating driver "docker" against &{Name:stopped-upgrade-311194 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:stopped-upgrade-311194 Namespace:default APIServerName:minikubeCA APIServer
Names:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.155 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath
: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s}
	I0918 19:36:08.395216  771154 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 19:36:08.395870  771154 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0918 19:36:08.465548  771154 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:45 SystemTime:2023-09-18 19:36:08.456146677 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> S
erverErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0918 19:36:08.465847  771154 cni.go:84] Creating CNI manager for ""
	I0918 19:36:08.465866  771154 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0918 19:36:08.465879  771154 start_flags.go:321] config:
	{Name:stopped-upgrade-311194 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:stopped-upgrade-311194 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket
: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.155 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s}
	I0918 19:36:08.468120  771154 out.go:177] * Starting control plane node stopped-upgrade-311194 in cluster stopped-upgrade-311194
	I0918 19:36:08.470125  771154 cache.go:122] Beginning downloading kic base image for docker with crio
	I0918 19:36:08.471910  771154 out.go:177] * Pulling base image ...
	I0918 19:36:08.473549  771154 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I0918 19:36:08.473679  771154 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I0918 19:36:08.493755  771154 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon, skipping pull
	I0918 19:36:08.493786  771154 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e exists in daemon, skipping load
	W0918 19:36:08.552104  771154 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I0918 19:36:08.552262  771154 profile.go:148] Saving config to /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/stopped-upgrade-311194/config.json ...
	I0918 19:36:08.552338  771154 cache.go:107] acquiring lock: {Name:mk0448c5eaa4bfb73be4e8a89c51ab7eac017bb2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 19:36:08.552432  771154 cache.go:115] /home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0918 19:36:08.552441  771154 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 110.458µs
	I0918 19:36:08.552465  771154 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0918 19:36:08.552474  771154 cache.go:107] acquiring lock: {Name:mkb1bcf2c79ad5dd72a21a8060abf258158deb3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 19:36:08.552505  771154 cache.go:115] /home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I0918 19:36:08.552510  771154 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 38.269µs
	I0918 19:36:08.552517  771154 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	I0918 19:36:08.552523  771154 cache.go:195] Successfully downloaded all kic artifacts
	I0918 19:36:08.552523  771154 cache.go:107] acquiring lock: {Name:mk94436bfb3f6c5c3b54b56ba14852d3d7287ff8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 19:36:08.552550  771154 cache.go:115] /home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I0918 19:36:08.552555  771154 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 32.583µs
	I0918 19:36:08.552561  771154 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	I0918 19:36:08.552546  771154 start.go:365] acquiring machines lock for stopped-upgrade-311194: {Name:mk83fcd0fc3464d857c8e4c54eef92bfeca61f6e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 19:36:08.552571  771154 cache.go:107] acquiring lock: {Name:mkae3ea1402778dbccc66e56bdf38eb58069c2ec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 19:36:08.552600  771154 cache.go:115] /home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I0918 19:36:08.552599  771154 start.go:369] acquired machines lock for "stopped-upgrade-311194" in 27.718µs
	I0918 19:36:08.552604  771154 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 34.092µs
	I0918 19:36:08.552616  771154 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	I0918 19:36:08.552626  771154 cache.go:107] acquiring lock: {Name:mkda06880cf9e7b8736499fc8bc2f400a2112f1f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 19:36:08.552652  771154 cache.go:115] /home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I0918 19:36:08.552656  771154 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 31.393µs
	I0918 19:36:08.552664  771154 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	I0918 19:36:08.552673  771154 cache.go:107] acquiring lock: {Name:mk8176c8ba014cd239d9c555b9081841f8d76309 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 19:36:08.552696  771154 cache.go:115] /home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I0918 19:36:08.552701  771154 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 28.701µs
	I0918 19:36:08.552707  771154 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	I0918 19:36:08.552717  771154 cache.go:107] acquiring lock: {Name:mkba817b4e85381a0702f3b894daea208c197c81 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 19:36:08.552740  771154 start.go:96] Skipping create...Using existing machine configuration
	I0918 19:36:08.552748  771154 fix.go:54] fixHost starting: 
	I0918 19:36:08.552750  771154 cache.go:107] acquiring lock: {Name:mk1f57987a30fd148d99b1a6ecc02f44f5054f82 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0918 19:36:08.552776  771154 cache.go:115] /home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I0918 19:36:08.552781  771154 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 33.01µs
	I0918 19:36:08.552788  771154 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	I0918 19:36:08.552741  771154 cache.go:115] /home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I0918 19:36:08.552897  771154 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 181.728µs
	I0918 19:36:08.552907  771154 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/17263-642665/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I0918 19:36:08.552914  771154 cache.go:87] Successfully saved all images to host disk.
	I0918 19:36:08.553013  771154 cli_runner.go:164] Run: docker container inspect stopped-upgrade-311194 --format={{.State.Status}}
	I0918 19:36:08.575621  771154 fix.go:102] recreateIfNeeded on stopped-upgrade-311194: state=Stopped err=<nil>
	W0918 19:36:08.575663  771154 fix.go:128] unexpected machine state, will restart: <nil>
	I0918 19:36:08.578082  771154 out.go:177] * Restarting existing docker container for "stopped-upgrade-311194" ...
	I0918 19:36:08.580031  771154 cli_runner.go:164] Run: docker start stopped-upgrade-311194
	I0918 19:36:08.913290  771154 cli_runner.go:164] Run: docker container inspect stopped-upgrade-311194 --format={{.State.Status}}
	I0918 19:36:08.942849  771154 kic.go:426] container "stopped-upgrade-311194" state is running.
	I0918 19:36:08.943261  771154 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-311194
	I0918 19:36:08.968188  771154 profile.go:148] Saving config to /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/stopped-upgrade-311194/config.json ...
	I0918 19:36:08.968432  771154 machine.go:88] provisioning docker machine ...
	I0918 19:36:08.968454  771154 ubuntu.go:169] provisioning hostname "stopped-upgrade-311194"
	I0918 19:36:08.968510  771154 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-311194
	I0918 19:36:08.998864  771154 main.go:141] libmachine: Using SSH client type: native
	I0918 19:36:08.999313  771154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 33602 <nil> <nil>}
	I0918 19:36:08.999326  771154 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-311194 && echo "stopped-upgrade-311194" | sudo tee /etc/hostname
	I0918 19:36:09.000052  771154 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0918 19:36:12.169017  771154 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-311194
	
	I0918 19:36:12.169099  771154 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-311194
	I0918 19:36:12.189436  771154 main.go:141] libmachine: Using SSH client type: native
	I0918 19:36:12.189851  771154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 33602 <nil> <nil>}
	I0918 19:36:12.189876  771154 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-311194' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-311194/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-311194' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0918 19:36:12.336933  771154 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0918 19:36:12.336958  771154 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17263-642665/.minikube CaCertPath:/home/jenkins/minikube-integration/17263-642665/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17263-642665/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17263-642665/.minikube}
	I0918 19:36:12.336989  771154 ubuntu.go:177] setting up certificates
	I0918 19:36:12.336999  771154 provision.go:83] configureAuth start
	I0918 19:36:12.337063  771154 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-311194
	I0918 19:36:12.355762  771154 provision.go:138] copyHostCerts
	I0918 19:36:12.355912  771154 exec_runner.go:144] found /home/jenkins/minikube-integration/17263-642665/.minikube/ca.pem, removing ...
	I0918 19:36:12.355940  771154 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17263-642665/.minikube/ca.pem
	I0918 19:36:12.356021  771154 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17263-642665/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17263-642665/.minikube/ca.pem (1082 bytes)
	I0918 19:36:12.356167  771154 exec_runner.go:144] found /home/jenkins/minikube-integration/17263-642665/.minikube/cert.pem, removing ...
	I0918 19:36:12.356178  771154 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17263-642665/.minikube/cert.pem
	I0918 19:36:12.356205  771154 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17263-642665/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17263-642665/.minikube/cert.pem (1123 bytes)
	I0918 19:36:12.356264  771154 exec_runner.go:144] found /home/jenkins/minikube-integration/17263-642665/.minikube/key.pem, removing ...
	I0918 19:36:12.356273  771154 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17263-642665/.minikube/key.pem
	I0918 19:36:12.356297  771154 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17263-642665/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17263-642665/.minikube/key.pem (1675 bytes)
	I0918 19:36:12.356343  771154 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17263-642665/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17263-642665/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17263-642665/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-311194 san=[192.168.59.155 127.0.0.1 localhost 127.0.0.1 minikube stopped-upgrade-311194]
	I0918 19:36:13.180012  771154 provision.go:172] copyRemoteCerts
	I0918 19:36:13.180088  771154 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0918 19:36:13.180146  771154 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-311194
	I0918 19:36:13.198155  771154 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33602 SSHKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/stopped-upgrade-311194/id_rsa Username:docker}
	I0918 19:36:13.296811  771154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17263-642665/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0918 19:36:13.321246  771154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17263-642665/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0918 19:36:13.344662  771154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17263-642665/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0918 19:36:13.367826  771154 provision.go:86] duration metric: configureAuth took 1.030811317s
	I0918 19:36:13.367855  771154 ubuntu.go:193] setting minikube options for container-runtime
	I0918 19:36:13.368084  771154 config.go:182] Loaded profile config "stopped-upgrade-311194": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I0918 19:36:13.368229  771154 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-311194
	I0918 19:36:13.388305  771154 main.go:141] libmachine: Using SSH client type: native
	I0918 19:36:13.388725  771154 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ac3c0] 0x3aeb30 <nil>  [] 0s} 127.0.0.1 33602 <nil> <nil>}
	I0918 19:36:13.388745  771154 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0918 19:36:13.827863  771154 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0918 19:36:13.827886  771154 machine.go:91] provisioned docker machine in 4.859437144s
	I0918 19:36:13.827896  771154 start.go:300] post-start starting for "stopped-upgrade-311194" (driver="docker")
	I0918 19:36:13.827908  771154 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0918 19:36:13.827976  771154 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0918 19:36:13.828016  771154 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-311194
	I0918 19:36:13.849212  771154 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33602 SSHKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/stopped-upgrade-311194/id_rsa Username:docker}
	I0918 19:36:13.953329  771154 ssh_runner.go:195] Run: cat /etc/os-release
	I0918 19:36:13.957406  771154 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0918 19:36:13.957433  771154 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0918 19:36:13.957444  771154 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0918 19:36:13.957451  771154 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I0918 19:36:13.957461  771154 filesync.go:126] Scanning /home/jenkins/minikube-integration/17263-642665/.minikube/addons for local assets ...
	I0918 19:36:13.957520  771154 filesync.go:126] Scanning /home/jenkins/minikube-integration/17263-642665/.minikube/files for local assets ...
	I0918 19:36:13.957607  771154 filesync.go:149] local asset: /home/jenkins/minikube-integration/17263-642665/.minikube/files/etc/ssl/certs/6480032.pem -> 6480032.pem in /etc/ssl/certs
	I0918 19:36:13.957720  771154 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0918 19:36:13.966714  771154 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17263-642665/.minikube/files/etc/ssl/certs/6480032.pem --> /etc/ssl/certs/6480032.pem (1708 bytes)
	I0918 19:36:13.990420  771154 start.go:303] post-start completed in 162.506306ms
	I0918 19:36:13.990518  771154 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0918 19:36:13.990577  771154 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-311194
	I0918 19:36:14.016689  771154 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33602 SSHKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/stopped-upgrade-311194/id_rsa Username:docker}
	I0918 19:36:14.120352  771154 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0918 19:36:14.127037  771154 fix.go:56] fixHost completed within 5.574279666s
	I0918 19:36:14.127059  771154 start.go:83] releasing machines lock for "stopped-upgrade-311194", held for 5.574451524s
	I0918 19:36:14.127140  771154 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-311194
	I0918 19:36:14.154818  771154 ssh_runner.go:195] Run: cat /version.json
	I0918 19:36:14.154871  771154 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-311194
	I0918 19:36:14.155903  771154 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0918 19:36:14.155969  771154 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-311194
	I0918 19:36:14.187855  771154 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33602 SSHKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/stopped-upgrade-311194/id_rsa Username:docker}
	I0918 19:36:14.201651  771154 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33602 SSHKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/stopped-upgrade-311194/id_rsa Username:docker}
	W0918 19:36:14.296447  771154 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0918 19:36:14.296553  771154 ssh_runner.go:195] Run: systemctl --version
	I0918 19:36:14.374243  771154 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0918 19:36:14.494498  771154 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0918 19:36:14.502247  771154 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0918 19:36:14.529761  771154 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0918 19:36:14.529852  771154 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0918 19:36:14.569462  771154 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0918 19:36:14.569489  771154 start.go:469] detecting cgroup driver to use...
	I0918 19:36:14.569523  771154 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0918 19:36:14.569582  771154 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0918 19:36:14.606184  771154 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0918 19:36:14.620183  771154 docker.go:196] disabling cri-docker service (if available) ...
	I0918 19:36:14.620261  771154 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0918 19:36:14.634603  771154 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0918 19:36:14.657474  771154 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0918 19:36:14.684739  771154 docker.go:206] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0918 19:36:14.684834  771154 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0918 19:36:14.815118  771154 docker.go:212] disabling docker service ...
	I0918 19:36:14.815193  771154 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0918 19:36:14.832788  771154 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0918 19:36:14.847411  771154 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0918 19:36:14.989847  771154 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0918 19:36:15.156627  771154 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0918 19:36:15.171983  771154 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0918 19:36:15.194800  771154 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0918 19:36:15.194872  771154 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0918 19:36:15.212221  771154 out.go:177] 
	W0918 19:36:15.214143  771154 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0918 19:36:15.214166  771154 out.go:239] * 
	W0918 19:36:15.215267  771154 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0918 19:36:15.217373  771154 out.go:177] 

** /stderr **
version_upgrade_test.go:213: upgrade from v1.17.0 to HEAD failed: out/minikube-linux-arm64 start -p stopped-upgrade-311194 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (78.35s)
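The root cause is explicit in the stderr above: the new binary edits /etc/crio/crio.conf.d/02-crio.conf, a drop-in file the v0.0.17 kicbase shipped with minikube v1.17.0 does not have (cri-o of that era kept pause_image in the monolithic /etc/crio/crio.conf, an assumption based on its packaging at the time). A guarded variant of the failing command, sketched for illustration only and not minikube's actual remedy:

	CONF=/etc/crio/crio.conf.d/02-crio.conf
	# fall back to the legacy monolithic config if the drop-in file is absent
	sudo test -f "$CONF" || CONF=/etc/crio/crio.conf
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$CONF"
	# avoids the status-2 exit seen above when only the legacy file exists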


Test pass (268/304)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 12.41
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.07
10 TestDownloadOnly/v1.28.2/json-events 13.6
11 TestDownloadOnly/v1.28.2/preload-exists 0
15 TestDownloadOnly/v1.28.2/LogsDuration 0.08
16 TestDownloadOnly/DeleteAll 0.23
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.15
19 TestBinaryMirror 0.63
22 TestAddons/Setup 166.39
24 TestAddons/parallel/Registry 17.01
26 TestAddons/parallel/InspektorGadget 10.81
27 TestAddons/parallel/MetricsServer 6.02
30 TestAddons/parallel/CSI 44.45
32 TestAddons/parallel/CloudSpanner 5.81
35 TestAddons/serial/GCPAuth/Namespaces 0.19
36 TestAddons/StoppedEnableDisable 12.39
37 TestCertOptions 38.97
38 TestCertExpiration 259.6
40 TestForceSystemdFlag 37.69
41 TestForceSystemdEnv 48.14
47 TestErrorSpam/setup 30.84
48 TestErrorSpam/start 0.84
49 TestErrorSpam/status 1.17
50 TestErrorSpam/pause 1.92
51 TestErrorSpam/unpause 1.99
52 TestErrorSpam/stop 1.44
55 TestFunctional/serial/CopySyncFile 0
56 TestFunctional/serial/StartWithProxy 76.99
57 TestFunctional/serial/AuditLog 0
58 TestFunctional/serial/SoftStart 40.28
59 TestFunctional/serial/KubeContext 0.07
60 TestFunctional/serial/KubectlGetPods 0.1
63 TestFunctional/serial/CacheCmd/cache/add_remote 4.15
64 TestFunctional/serial/CacheCmd/cache/add_local 1.2
65 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
66 TestFunctional/serial/CacheCmd/cache/list 0.06
67 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.36
68 TestFunctional/serial/CacheCmd/cache/cache_reload 2.09
69 TestFunctional/serial/CacheCmd/cache/delete 0.15
70 TestFunctional/serial/MinikubeKubectlCmd 0.16
71 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.15
72 TestFunctional/serial/ExtraConfig 33.64
73 TestFunctional/serial/ComponentHealth 0.1
74 TestFunctional/serial/LogsCmd 1.97
75 TestFunctional/serial/LogsFileCmd 1.92
76 TestFunctional/serial/InvalidService 4.15
78 TestFunctional/parallel/ConfigCmd 0.52
79 TestFunctional/parallel/DashboardCmd 10.06
80 TestFunctional/parallel/DryRun 0.52
81 TestFunctional/parallel/InternationalLanguage 0.21
82 TestFunctional/parallel/StatusCmd 1.3
86 TestFunctional/parallel/ServiceCmdConnect 12.75
87 TestFunctional/parallel/AddonsCmd 0.2
88 TestFunctional/parallel/PersistentVolumeClaim 26.08
90 TestFunctional/parallel/SSHCmd 0.74
91 TestFunctional/parallel/CpCmd 1.6
93 TestFunctional/parallel/FileSync 0.44
94 TestFunctional/parallel/CertSync 2.52
98 TestFunctional/parallel/NodeLabels 0.11
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.86
102 TestFunctional/parallel/License 0.32
104 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.69
105 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
107 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.44
108 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.12
109 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
113 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
114 TestFunctional/parallel/ServiceCmd/DeployApp 6.23
115 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
116 TestFunctional/parallel/ProfileCmd/profile_list 0.43
117 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
118 TestFunctional/parallel/MountCmd/any-port 7.79
119 TestFunctional/parallel/ServiceCmd/List 0.69
120 TestFunctional/parallel/ServiceCmd/JSONOutput 0.63
121 TestFunctional/parallel/ServiceCmd/HTTPS 0.51
122 TestFunctional/parallel/ServiceCmd/Format 0.65
123 TestFunctional/parallel/ServiceCmd/URL 0.48
124 TestFunctional/parallel/MountCmd/specific-port 1.66
125 TestFunctional/parallel/MountCmd/VerifyCleanup 2.25
126 TestFunctional/parallel/Version/short 0.06
127 TestFunctional/parallel/Version/components 1.07
128 TestFunctional/parallel/ImageCommands/ImageListShort 0.29
129 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
130 TestFunctional/parallel/ImageCommands/ImageListJson 0.29
131 TestFunctional/parallel/ImageCommands/ImageListYaml 0.33
132 TestFunctional/parallel/ImageCommands/ImageBuild 3.38
133 TestFunctional/parallel/ImageCommands/Setup 2.61
134 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.99
135 TestFunctional/parallel/UpdateContextCmd/no_changes 0.26
136 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.29
137 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.26
138 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.02
139 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.84
140 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.94
141 TestFunctional/parallel/ImageCommands/ImageRemove 0.53
142 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.4
143 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.98
144 TestFunctional/delete_addon-resizer_images 0.1
145 TestFunctional/delete_my-image_image 0.02
146 TestFunctional/delete_minikube_cached_images 0.02
150 TestIngressAddonLegacy/StartLegacyK8sCluster 103.06
152 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 12.48
153 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.69
157 TestJSONOutput/start/Command 73.98
158 TestJSONOutput/start/Audit 0
160 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
161 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
163 TestJSONOutput/pause/Command 0.84
164 TestJSONOutput/pause/Audit 0
166 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
167 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
169 TestJSONOutput/unpause/Command 0.74
170 TestJSONOutput/unpause/Audit 0
172 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
173 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
175 TestJSONOutput/stop/Command 5.92
176 TestJSONOutput/stop/Audit 0
178 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
179 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
180 TestErrorJSONOutput 0.23
182 TestKicCustomNetwork/create_custom_network 45.58
183 TestKicCustomNetwork/use_default_bridge_network 35.39
184 TestKicExistingNetwork 34.17
185 TestKicCustomSubnet 37.22
186 TestKicStaticIP 36.9
187 TestMainNoArgs 0.06
188 TestMinikubeProfile 74.6
191 TestMountStart/serial/StartWithMountFirst 7.04
192 TestMountStart/serial/VerifyMountFirst 0.29
193 TestMountStart/serial/StartWithMountSecond 10.06
194 TestMountStart/serial/VerifyMountSecond 0.28
195 TestMountStart/serial/DeleteFirst 1.69
196 TestMountStart/serial/VerifyMountPostDelete 0.29
197 TestMountStart/serial/Stop 1.24
198 TestMountStart/serial/RestartStopped 8.77
199 TestMountStart/serial/VerifyMountPostStop 0.28
202 TestMultiNode/serial/FreshStart2Nodes 131.39
203 TestMultiNode/serial/DeployApp2Nodes 6.18
205 TestMultiNode/serial/AddNode 50.56
206 TestMultiNode/serial/ProfileList 0.36
207 TestMultiNode/serial/CopyFile 11.08
208 TestMultiNode/serial/StopNode 2.42
209 TestMultiNode/serial/StartAfterStop 13.48
210 TestMultiNode/serial/RestartKeepsNodes 121.97
211 TestMultiNode/serial/DeleteNode 5.24
212 TestMultiNode/serial/StopMultiNode 24.16
213 TestMultiNode/serial/RestartMultiNode 82.27
214 TestMultiNode/serial/ValidateNameConflict 35.95
219 TestPreload 172.07
221 TestScheduledStopUnix 106.75
224 TestInsufficientStorage 11.32
227 TestKubernetesUpgrade 397.19
231 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
232 TestPause/serial/Start 90.39
233 TestNoKubernetes/serial/StartWithK8s 43.71
234 TestNoKubernetes/serial/StartWithStopK8s 7.14
235 TestNoKubernetes/serial/Start 9.38
236 TestNoKubernetes/serial/VerifyK8sNotRunning 0.32
237 TestNoKubernetes/serial/ProfileList 0.99
238 TestNoKubernetes/serial/Stop 1.24
239 TestNoKubernetes/serial/StartNoArgs 7.64
240 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.31
241 TestPause/serial/SecondStartNoReconfiguration 30.17
242 TestPause/serial/Pause 1.16
243 TestPause/serial/VerifyStatus 0.43
244 TestPause/serial/Unpause 1.1
245 TestPause/serial/PauseAgain 1.6
246 TestPause/serial/DeletePaused 3.54
247 TestPause/serial/VerifyDeletedResources 0.43
248 TestStoppedBinaryUpgrade/Setup 1.23
250 TestStoppedBinaryUpgrade/MinikubeLogs 0.68
265 TestNetworkPlugins/group/false 3.89
270 TestStartStop/group/old-k8s-version/serial/FirstStart 122.29
271 TestStartStop/group/old-k8s-version/serial/DeployApp 10.61
272 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.01
273 TestStartStop/group/old-k8s-version/serial/Stop 12.13
274 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
275 TestStartStop/group/old-k8s-version/serial/SecondStart 436.36
277 TestStartStop/group/no-preload/serial/FirstStart 63.84
278 TestStartStop/group/no-preload/serial/DeployApp 9.51
279 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.22
280 TestStartStop/group/no-preload/serial/Stop 12.18
281 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
282 TestStartStop/group/no-preload/serial/SecondStart 347.87
283 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.03
284 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.14
285 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.42
286 TestStartStop/group/old-k8s-version/serial/Pause 4.68
288 TestStartStop/group/embed-certs/serial/FirstStart 83.2
289 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 11.06
290 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.14
291 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.39
292 TestStartStop/group/no-preload/serial/Pause 3.86
294 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 79.42
295 TestStartStop/group/embed-certs/serial/DeployApp 9.6
296 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.28
297 TestStartStop/group/embed-certs/serial/Stop 12.13
298 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
299 TestStartStop/group/embed-certs/serial/SecondStart 347.68
300 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.71
301 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.63
302 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.12
303 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
304 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 635.65
305 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 11.05
306 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
307 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.38
308 TestStartStop/group/embed-certs/serial/Pause 3.9
310 TestStartStop/group/newest-cni/serial/FirstStart 48.76
311 TestStartStop/group/newest-cni/serial/DeployApp 0
312 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.34
313 TestStartStop/group/newest-cni/serial/Stop 2.02
314 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
315 TestStartStop/group/newest-cni/serial/SecondStart 30.96
316 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
317 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
318 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.36
319 TestStartStop/group/newest-cni/serial/Pause 3.47
320 TestNetworkPlugins/group/auto/Start 53.35
321 TestNetworkPlugins/group/auto/KubeletFlags 0.33
322 TestNetworkPlugins/group/auto/NetCatPod 11.38
323 TestNetworkPlugins/group/auto/DNS 0.26
324 TestNetworkPlugins/group/auto/Localhost 0.21
325 TestNetworkPlugins/group/auto/HairPin 0.22
326 TestNetworkPlugins/group/kindnet/Start 81.89
327 TestNetworkPlugins/group/kindnet/ControllerPod 5.04
328 TestNetworkPlugins/group/kindnet/KubeletFlags 0.33
329 TestNetworkPlugins/group/kindnet/NetCatPod 11.33
330 TestNetworkPlugins/group/kindnet/DNS 0.2
331 TestNetworkPlugins/group/kindnet/Localhost 0.18
332 TestNetworkPlugins/group/kindnet/HairPin 0.19
333 TestNetworkPlugins/group/calico/Start 81.98
334 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 5.05
335 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.14
336 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.5
337 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.57
338 TestNetworkPlugins/group/custom-flannel/Start 74.04
339 TestNetworkPlugins/group/calico/ControllerPod 5.05
340 TestNetworkPlugins/group/calico/KubeletFlags 0.33
341 TestNetworkPlugins/group/calico/NetCatPod 10.44
342 TestNetworkPlugins/group/calico/DNS 0.25
343 TestNetworkPlugins/group/calico/Localhost 0.23
344 TestNetworkPlugins/group/calico/HairPin 0.19
345 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.37
346 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.34
347 TestNetworkPlugins/group/custom-flannel/DNS 0.37
348 TestNetworkPlugins/group/custom-flannel/Localhost 0.28
349 TestNetworkPlugins/group/custom-flannel/HairPin 0.25
350 TestNetworkPlugins/group/enable-default-cni/Start 48.36
351 TestNetworkPlugins/group/flannel/Start 74.24
352 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.45
353 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.55
354 TestNetworkPlugins/group/enable-default-cni/DNS 26.25
355 TestNetworkPlugins/group/enable-default-cni/Localhost 0.19
356 TestNetworkPlugins/group/enable-default-cni/HairPin 0.22
357 TestNetworkPlugins/group/flannel/ControllerPod 5.04
358 TestNetworkPlugins/group/flannel/KubeletFlags 0.41
359 TestNetworkPlugins/group/flannel/NetCatPod 11.5
360 TestNetworkPlugins/group/flannel/DNS 0.28
361 TestNetworkPlugins/group/flannel/Localhost 0.27
362 TestNetworkPlugins/group/flannel/HairPin 0.25
363 TestNetworkPlugins/group/bridge/Start 76.23
364 TestNetworkPlugins/group/bridge/KubeletFlags 0.32
365 TestNetworkPlugins/group/bridge/NetCatPod 10.34
366 TestNetworkPlugins/group/bridge/DNS 0.22
367 TestNetworkPlugins/group/bridge/Localhost 0.22
368 TestNetworkPlugins/group/bridge/HairPin 0.18
TestDownloadOnly/v1.16.0/json-events (12.41s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-623514 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-623514 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (12.411429773s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (12.41s)
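Because the start above runs with -o=json, every line it emits is a JSON CloudEvent. A quick way to follow progress by hand is to filter for step events; this sketch assumes minikube's io.k8s.sigs.minikube.step event type with a .data.name field, and jq is an external tool, not part of the test harness.

	out/minikube-linux-arm64 start -o=json --download-only -p download-only-623514 \
	  --force --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.name'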

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-623514
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-623514: exit status 85 (72.45857ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-623514 | jenkins | v1.31.2 | 18 Sep 23 18:54 UTC |          |
	|         | -p download-only-623514        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/18 18:54:46
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.21.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0918 18:54:46.156541  648008 out.go:296] Setting OutFile to fd 1 ...
	I0918 18:54:46.156737  648008 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 18:54:46.156765  648008 out.go:309] Setting ErrFile to fd 2...
	I0918 18:54:46.156785  648008 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 18:54:46.157054  648008 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17263-642665/.minikube/bin
	W0918 18:54:46.157233  648008 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17263-642665/.minikube/config/config.json: open /home/jenkins/minikube-integration/17263-642665/.minikube/config/config.json: no such file or directory
	I0918 18:54:46.157682  648008 out.go:303] Setting JSON to true
	I0918 18:54:46.158757  648008 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":9432,"bootTime":1695053855,"procs":348,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0918 18:54:46.158861  648008 start.go:138] virtualization:  
	I0918 18:54:46.162114  648008 out.go:97] [download-only-623514] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0918 18:54:46.164490  648008 out.go:169] MINIKUBE_LOCATION=17263
	W0918 18:54:46.162459  648008 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17263-642665/.minikube/cache/preloaded-tarball: no such file or directory
	I0918 18:54:46.162535  648008 notify.go:220] Checking for updates...
	I0918 18:54:46.169184  648008 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 18:54:46.171358  648008 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17263-642665/kubeconfig
	I0918 18:54:46.173496  648008 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17263-642665/.minikube
	I0918 18:54:46.175846  648008 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0918 18:54:46.180005  648008 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0918 18:54:46.180327  648008 driver.go:373] Setting default libvirt URI to qemu:///system
	I0918 18:54:46.204488  648008 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0918 18:54:46.204583  648008 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0918 18:54:46.296719  648008 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2023-09-18 18:54:46.285640564 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0918 18:54:46.296826  648008 docker.go:294] overlay module found
	I0918 18:54:46.298972  648008 out.go:97] Using the docker driver based on user configuration
	I0918 18:54:46.299003  648008 start.go:298] selected driver: docker
	I0918 18:54:46.299010  648008 start.go:902] validating driver "docker" against <nil>
	I0918 18:54:46.299125  648008 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0918 18:54:46.367652  648008 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2023-09-18 18:54:46.358229966 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0918 18:54:46.367878  648008 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0918 18:54:46.368155  648008 start_flags.go:384] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0918 18:54:46.368311  648008 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0918 18:54:46.370293  648008 out.go:169] Using Docker driver with root privileges
	I0918 18:54:46.372569  648008 cni.go:84] Creating CNI manager for ""
	I0918 18:54:46.372589  648008 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0918 18:54:46.372606  648008 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0918 18:54:46.372617  648008 start_flags.go:321] config:
	{Name:download-only-623514 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-623514 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0918 18:54:46.374860  648008 out.go:97] Starting control plane node download-only-623514 in cluster download-only-623514
	I0918 18:54:46.374881  648008 cache.go:122] Beginning downloading kic base image for docker with crio
	I0918 18:54:46.376686  648008 out.go:97] Pulling base image ...
	I0918 18:54:46.376734  648008 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0918 18:54:46.376836  648008 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon
	I0918 18:54:46.394141  648008 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 to local cache
	I0918 18:54:46.394739  648008 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local cache directory
	I0918 18:54:46.394849  648008 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 to local cache
	I0918 18:54:46.455945  648008 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4
	I0918 18:54:46.455972  648008 cache.go:57] Caching tarball of preloaded images
	I0918 18:54:46.456110  648008 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0918 18:54:46.458529  648008 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0918 18:54:46.458563  648008 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4 ...
	I0918 18:54:46.579031  648008 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:743cd3b7071469270e4dbdc0d89badaa -> /home/jenkins/minikube-integration/17263-642665/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4
	I0918 18:54:51.044561  648008 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 as a tarball
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-623514"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.07s)
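The Last Start log above shows the preload fetch carrying its digest in the URL (the ?checksum=md5: suffix is go-getter's convention, which minikube's download package delegates to). A hand-rolled equivalent for the same tarball follows, as a sketch only; the URL and digest are copied from the log, and curl plus md5sum stand in for the real implementation.

	url="https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4"
	want_md5="743cd3b7071469270e4dbdc0d89badaa"
	out="$HOME/.minikube/cache/preloaded-tarball/$(basename "$url")"
	mkdir -p "$(dirname "$out")"
	curl -fSL "$url" -o "$out"
	# Reject the download unless the digest matches the expected checksum.
	got_md5="$(md5sum "$out" | awk '{print $1}')"
	[ "$got_md5" = "$want_md5" ] || { echo "checksum mismatch: $got_md5" >&2; exit 1; }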

                                                
                                    
TestDownloadOnly/v1.28.2/json-events (13.6s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-623514 --force --alsologtostderr --kubernetes-version=v1.28.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-623514 --force --alsologtostderr --kubernetes-version=v1.28.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (13.604613201s)
--- PASS: TestDownloadOnly/v1.28.2/json-events (13.60s)

                                                
                                    
TestDownloadOnly/v1.28.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.2/preload-exists
--- PASS: TestDownloadOnly/v1.28.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.2/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.2/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-623514
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-623514: exit status 85 (76.314198ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-623514 | jenkins | v1.31.2 | 18 Sep 23 18:54 UTC |          |
	|         | -p download-only-623514        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-623514 | jenkins | v1.31.2 | 18 Sep 23 18:54 UTC |          |
	|         | -p download-only-623514        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.2   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/18 18:54:58
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.21.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0918 18:54:58.641711  648082 out.go:296] Setting OutFile to fd 1 ...
	I0918 18:54:58.641886  648082 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 18:54:58.641897  648082 out.go:309] Setting ErrFile to fd 2...
	I0918 18:54:58.641903  648082 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 18:54:58.642150  648082 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17263-642665/.minikube/bin
	W0918 18:54:58.642264  648082 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17263-642665/.minikube/config/config.json: open /home/jenkins/minikube-integration/17263-642665/.minikube/config/config.json: no such file or directory
	I0918 18:54:58.642526  648082 out.go:303] Setting JSON to true
	I0918 18:54:58.643577  648082 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":9444,"bootTime":1695053855,"procs":344,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0918 18:54:58.643646  648082 start.go:138] virtualization:  
	I0918 18:54:58.646008  648082 out.go:97] [download-only-623514] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0918 18:54:58.648119  648082 out.go:169] MINIKUBE_LOCATION=17263
	I0918 18:54:58.646299  648082 notify.go:220] Checking for updates...
	I0918 18:54:58.652864  648082 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 18:54:58.654998  648082 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17263-642665/kubeconfig
	I0918 18:54:58.656989  648082 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17263-642665/.minikube
	I0918 18:54:58.658993  648082 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0918 18:54:58.663063  648082 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0918 18:54:58.663562  648082 config.go:182] Loaded profile config "download-only-623514": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W0918 18:54:58.663636  648082 start.go:810] api.Load failed for download-only-623514: filestore "download-only-623514": Docker machine "download-only-623514" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0918 18:54:58.663738  648082 driver.go:373] Setting default libvirt URI to qemu:///system
	W0918 18:54:58.663766  648082 start.go:810] api.Load failed for download-only-623514: filestore "download-only-623514": Docker machine "download-only-623514" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0918 18:54:58.688098  648082 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0918 18:54:58.688202  648082 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0918 18:54:58.782283  648082 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-09-18 18:54:58.770434152 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0918 18:54:58.782403  648082 docker.go:294] overlay module found
	I0918 18:54:58.784451  648082 out.go:97] Using the docker driver based on existing profile
	I0918 18:54:58.784481  648082 start.go:298] selected driver: docker
	I0918 18:54:58.784489  648082 start.go:902] validating driver "docker" against &{Name:download-only-623514 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-623514 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0918 18:54:58.784714  648082 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0918 18:54:58.850753  648082 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-09-18 18:54:58.841455044 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0918 18:54:58.851194  648082 cni.go:84] Creating CNI manager for ""
	I0918 18:54:58.851211  648082 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0918 18:54:58.851224  648082 start_flags.go:321] config:
	{Name:download-only-623514 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:download-only-623514 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0918 18:54:58.853377  648082 out.go:97] Starting control plane node download-only-623514 in cluster download-only-623514
	I0918 18:54:58.853407  648082 cache.go:122] Beginning downloading kic base image for docker with crio
	I0918 18:54:58.855452  648082 out.go:97] Pulling base image ...
	I0918 18:54:58.855505  648082 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I0918 18:54:58.855666  648082 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local docker daemon
	I0918 18:54:58.873480  648082 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 to local cache
	I0918 18:54:58.873631  648082 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local cache directory
	I0918 18:54:58.873657  648082 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 in local cache directory, skipping pull
	I0918 18:54:58.873665  648082 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 exists in cache, skipping pull
	I0918 18:54:58.873673  648082 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 as a tarball
	I0918 18:54:58.916831  648082 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.2/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4
	I0918 18:54:58.916859  648082 cache.go:57] Caching tarball of preloaded images
	I0918 18:54:58.917514  648082 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime crio
	I0918 18:54:58.919835  648082 out.go:97] Downloading Kubernetes v1.28.2 preload ...
	I0918 18:54:58.919856  648082 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4 ...
	I0918 18:54:59.042942  648082 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.2/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4?checksum=md5:ec283948b04358f92432bdd325b7fb0b -> /home/jenkins/minikube-integration/17263-642665/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-cri-o-overlay-arm64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-623514"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.2/LogsDuration (0.08s)
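This second run's log shows the kicbase pull being skipped: the image is checked in the local docker daemon first, then in the on-disk cache directory, and only fetched over the network if both miss. A sketch of that short-circuit; the image ref is from the log, while the cache path below is a made-up illustration, not minikube's real cache layout.

	img="gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250"
	cache="$HOME/.minikube/cache/kic/arm64/kicbase.tar"   # hypothetical path
	if docker image inspect "$img" >/dev/null 2>&1; then
	  echo "present in docker daemon, skipping pull"
	elif [ -f "$cache" ]; then
	  echo "present in local cache, skipping pull"
	else
	  docker pull "$img"
	fi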

                                                
                                    
TestDownloadOnly/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.23s)

                                                
                                    
TestDownloadOnly/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-623514
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestBinaryMirror (0.63s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-476016 --alsologtostderr --binary-mirror http://127.0.0.1:35741 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-476016" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-476016
--- PASS: TestBinaryMirror (0.63s)

                                                
                                    
TestAddons/Setup (166.39s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-linux-arm64 start -p addons-351470 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:88: (dbg) Done: out/minikube-linux-arm64 start -p addons-351470 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns: (2m46.39279611s)
--- PASS: TestAddons/Setup (166.39s)
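The setup enables nine addons in a single start; on an already-running profile the same set can be switched on incrementally, which is handy when bisecting an addon failure. The names below are exactly the --addons values from the invocation above; note that gcp-auth may additionally expect credentials to be available.

	for a in registry metrics-server volumesnapshots csi-hostpath-driver \
	         gcp-auth cloud-spanner inspektor-gadget ingress ingress-dns; do
	  out/minikube-linux-arm64 -p addons-351470 addons enable "$a"
	done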

                                                
                                    
TestAddons/parallel/Registry (17.01s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:306: registry stabilized in 57.668279ms
addons_test.go:308: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-9gb28" [527d0996-363b-4641-aba2-49d6b29da00c] Running
addons_test.go:308: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.014577691s
addons_test.go:311: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-gzc8v" [b1fe082f-9b6f-41d3-964b-615c0229250d] Running
addons_test.go:311: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.016315525s
addons_test.go:316: (dbg) Run:  kubectl --context addons-351470 delete po -l run=registry-test --now
addons_test.go:321: (dbg) Run:  kubectl --context addons-351470 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:321: (dbg) Done: kubectl --context addons-351470 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.581767717s)
addons_test.go:335: (dbg) Run:  out/minikube-linux-arm64 -p addons-351470 ip
2023/09/18 18:58:16 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p addons-351470 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.01s)
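The probe the test issues above can be reproduced by hand: a throwaway busybox pod resolving the registry Service through cluster DNS. Context, image, and service name are taken from the log; --spider makes wget skip the response body and -S print the headers.

	kubectl --context addons-351470 run registry-probe --rm -it --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -- \
	  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"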

                                                
                                    
TestAddons/parallel/InspektorGadget (10.81s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-sj9pr" [5564fe9d-1dfd-4d60-b65b-9793a0cddc50] Running
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.016236583s
addons_test.go:817: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-351470
addons_test.go:817: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-351470: (5.794194595s)
--- PASS: TestAddons/parallel/InspektorGadget (10.81s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.02s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:383: metrics-server stabilized in 10.973518ms
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-z9mjl" [5d85482f-b583-40c4-b7e9-0174b3dedab1] Running
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.015967761s
addons_test.go:391: (dbg) Run:  kubectl --context addons-351470 top pods -n kube-system
addons_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p addons-351470 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.02s)
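A minimal readiness gate mirroring what the test does above: block until the metrics-server pod is Ready, then confirm the metrics API answers. The label, namespace, and 6m budget are from the log; kubectl wait stands in for the test's own polling helpers.

	kubectl --context addons-351470 -n kube-system wait pod \
	  -l k8s-app=metrics-server --for=condition=Ready --timeout=6m
	kubectl --context addons-351470 top pods -n kube-system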

                                                
                                    
TestAddons/parallel/CSI (44.45s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:537: csi-hostpath-driver pods stabilized in 5.323394ms
addons_test.go:540: (dbg) Run:  kubectl --context addons-351470 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:545: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-351470 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-351470 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-351470 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:550: (dbg) Run:  kubectl --context addons-351470 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [9922faa8-f412-4201-a9c9-defdc2ae34b4] Pending
helpers_test.go:344: "task-pv-pod" [9922faa8-f412-4201-a9c9-defdc2ae34b4] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [9922faa8-f412-4201-a9c9-defdc2ae34b4] Running
addons_test.go:555: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.021640741s
addons_test.go:560: (dbg) Run:  kubectl --context addons-351470 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-351470 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-351470 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-351470 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:570: (dbg) Run:  kubectl --context addons-351470 delete pod task-pv-pod
addons_test.go:576: (dbg) Run:  kubectl --context addons-351470 delete pvc hpvc
addons_test.go:582: (dbg) Run:  kubectl --context addons-351470 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:587: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-351470 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-351470 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-351470 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-351470 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-351470 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-351470 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-351470 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-351470 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-351470 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-351470 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-351470 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-351470 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:592: (dbg) Run:  kubectl --context addons-351470 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:597: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [1830faa5-144b-40d1-9ba1-ea16f53021e5] Pending
helpers_test.go:344: "task-pv-pod-restore" [1830faa5-144b-40d1-9ba1-ea16f53021e5] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [1830faa5-144b-40d1-9ba1-ea16f53021e5] Running
addons_test.go:597: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.015836289s
addons_test.go:602: (dbg) Run:  kubectl --context addons-351470 delete pod task-pv-pod-restore
addons_test.go:606: (dbg) Run:  kubectl --context addons-351470 delete pvc hpvc-restore
addons_test.go:610: (dbg) Run:  kubectl --context addons-351470 delete volumesnapshot new-snapshot-demo
addons_test.go:614: (dbg) Run:  out/minikube-linux-arm64 -p addons-351470 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:614: (dbg) Done: out/minikube-linux-arm64 -p addons-351470 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.823153926s)
addons_test.go:618: (dbg) Run:  out/minikube-linux-arm64 -p addons-351470 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (44.45s)
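
Condensed, this test walks the full CSI snapshot/restore lifecycle with the manifests referenced above. A sketch of the same sequence (--context addons-351470 omitted for brevity; the repeated "get pvc ... jsonpath" lines in the log are polls of the fields shown in the comments):

    kubectl create -f testdata/csi-hostpath-driver/pvc.yaml              # PVC "hpvc"
    kubectl get pvc hpvc -o jsonpath={.status.phase}                     # repeat until "Bound"
    kubectl create -f testdata/csi-hostpath-driver/pv-pod.yaml           # pod "task-pv-pod" mounts it
    kubectl create -f testdata/csi-hostpath-driver/snapshot.yaml         # VolumeSnapshot "new-snapshot-demo"
    kubectl get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse}   # repeat until "true"
    kubectl delete pod task-pv-pod && kubectl delete pvc hpvc            # source objects can now go away
    kubectl create -f testdata/csi-hostpath-driver/pvc-restore.yaml      # PVC "hpvc-restore", sourced from the snapshot
    kubectl create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml   # pod "task-pv-pod-restore" mounts the restore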

TestAddons/parallel/CloudSpanner (5.81s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-7d49f968d9-2wm2h" [4757cd07-5aa4-4fb4-b4be-af4087e07f4f] Running / Ready:ContainersNotReady (containers with unready status: [cloud-spanner-emulator]) / ContainersReady:ContainersNotReady (containers with unready status: [cloud-spanner-emulator])
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.017849676s
addons_test.go:836: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-351470
--- PASS: TestAddons/parallel/CloudSpanner (5.81s)
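
The check is a readiness wait on the emulator deployment (label app=cloud-spanner-emulator, visible above) followed by a disable. By hand, as a sketch:

    minikube -p addons-351470 addons enable cloud-spanner
    kubectl --context addons-351470 wait --for=condition=Ready pod -l app=cloud-spanner-emulator --timeout=6m
    minikube -p addons-351470 addons disable cloud-spanner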

TestAddons/serial/GCPAuth/Namespaces (0.19s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:626: (dbg) Run:  kubectl --context addons-351470 create ns new-namespace
addons_test.go:640: (dbg) Run:  kubectl --context addons-351470 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)
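
What this asserts: with the gcp-auth addon active, the gcp-auth secret is replicated into any newly created namespace, so workloads there pick up credentials without extra wiring. The two commands above are already the whole recipe:

    kubectl --context addons-351470 create ns new-namespace
    kubectl --context addons-351470 get secret gcp-auth -n new-namespace   # present without any manual copy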

TestAddons/StoppedEnableDisable (12.39s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:148: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-351470
addons_test.go:148: (dbg) Done: out/minikube-linux-arm64 stop -p addons-351470: (12.102380113s)
addons_test.go:152: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-351470
addons_test.go:156: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-351470
addons_test.go:161: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-351470
--- PASS: TestAddons/StoppedEnableDisable (12.39s)
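
The point of this test is that addon toggles are accepted against a stopped profile (presumably they only update the profile's configuration until the next start). Sketch:

    minikube stop -p addons-351470
    minikube addons enable dashboard -p addons-351470    # accepted while the cluster is down
    minikube addons disable dashboard -p addons-351470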

TestCertOptions (38.97s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-423025 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-423025 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (36.123935296s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-423025 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-423025 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-423025 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-423025" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-423025
E0918 19:40:14.595735  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/functional-382151/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-423025: (2.097472848s)
--- PASS: TestCertOptions (38.97s)
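
The test starts a cluster with extra apiserver SANs and a non-default port, then inspects the served certificate. A hand-run sketch of the same verification (the grep target is the standard section name in openssl's text output):

    minikube start -p cert-options-423025 --apiserver-ips=192.168.15.15 --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --container-runtime=crio
    minikube -p cert-options-423025 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 'Subject Alternative Name'
    kubectl --context cert-options-423025 config view | grep 8555   # kubeconfig should target the custom port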

TestCertExpiration (259.6s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-982247 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-982247 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (39.863441968s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-982247 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
E0918 19:43:00.487253  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/client.crt: no such file or directory
E0918 19:43:17.644567  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/functional-382151/client.crt: no such file or directory
E0918 19:43:19.084046  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/ingress-addon-legacy-407320/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-982247 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (36.838385197s)
helpers_test.go:175: Cleaning up "cert-expiration-982247" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-982247
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-982247: (2.900079095s)
--- PASS: TestCertExpiration (259.60s)
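
Note the arithmetic: the two starts account for roughly 77s and the delete for 3s, so most of the 259.60s runtime is the test deliberately waiting out the 3-minute certificate validity before restarting with a long validity to force re-issue. A sketch (the explicit sleep stands in for however the harness waits):

    minikube start -p cert-expiration-982247 --cert-expiration=3m --driver=docker --container-runtime=crio
    sleep 180   # let the short-lived certs expire
    minikube start -p cert-expiration-982247 --cert-expiration=8760h --driver=docker --container-runtime=crio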

TestForceSystemdFlag (37.69s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-039627 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0918 19:38:00.487143  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/client.crt: no such file or directory
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-039627 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (34.881535801s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-039627 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-039627" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-039627
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-039627: (2.487695228s)
--- PASS: TestForceSystemdFlag (37.69s)
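
--force-systemd should leave CRI-O configured with the systemd cgroup manager, which is what the cat of 02-crio.conf checks. Hand-run sketch; the expected cgroup_manager value is an assumption about the assertion, not quoted from this log:

    minikube start -p force-systemd-flag-039627 --force-systemd --driver=docker --container-runtime=crio
    minikube -p force-systemd-flag-039627 ssh "cat /etc/crio/crio.conf.d/02-crio.conf" | grep cgroup_manager
    # expected (assumption): cgroup_manager = "systemd"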

TestForceSystemdEnv (48.14s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-836136 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0918 19:38:19.084304  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/ingress-addon-legacy-407320/client.crt: no such file or directory
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-836136 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (45.210464892s)
helpers_test.go:175: Cleaning up "force-systemd-env-836136" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-836136
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-836136: (2.931445491s)
--- PASS: TestForceSystemdEnv (48.14s)
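
Same behavior as the flag test, but driven through the environment; MINIKUBE_FORCE_SYSTEMD shows up in the environment listings elsewhere in this report. A sketch, assuming that is the variable this test sets:

    MINIKUBE_FORCE_SYSTEMD=true minikube start -p force-systemd-env-836136 --driver=docker --container-runtime=crio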

TestErrorSpam/setup (30.84s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-641579 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-641579 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-641579 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-641579 --driver=docker  --container-runtime=crio: (30.843093227s)
--- PASS: TestErrorSpam/setup (30.84s)

TestErrorSpam/start (0.84s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-641579 --log_dir /tmp/nospam-641579 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-641579 --log_dir /tmp/nospam-641579 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-641579 --log_dir /tmp/nospam-641579 start --dry-run
--- PASS: TestErrorSpam/start (0.84s)

TestErrorSpam/status (1.17s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-641579 --log_dir /tmp/nospam-641579 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-641579 --log_dir /tmp/nospam-641579 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-641579 --log_dir /tmp/nospam-641579 status
--- PASS: TestErrorSpam/status (1.17s)

TestErrorSpam/pause (1.92s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-641579 --log_dir /tmp/nospam-641579 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-641579 --log_dir /tmp/nospam-641579 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-641579 --log_dir /tmp/nospam-641579 pause
--- PASS: TestErrorSpam/pause (1.92s)

TestErrorSpam/unpause (1.99s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-641579 --log_dir /tmp/nospam-641579 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-641579 --log_dir /tmp/nospam-641579 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-641579 --log_dir /tmp/nospam-641579 unpause
--- PASS: TestErrorSpam/unpause (1.99s)

TestErrorSpam/stop (1.44s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-641579 --log_dir /tmp/nospam-641579 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-641579 --log_dir /tmp/nospam-641579 stop: (1.240684158s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-641579 --log_dir /tmp/nospam-641579 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-641579 --log_dir /tmp/nospam-641579 stop
--- PASS: TestErrorSpam/stop (1.44s)
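
The whole TestErrorSpam group follows one pattern: run each subcommand repeatedly against a profile whose logs are redirected with --log_dir, and fail if unexpected warnings or errors appear. A sketch of the pattern, using the logged flags:

    minikube start -p nospam-641579 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-641579 --driver=docker --container-runtime=crio
    minikube -p nospam-641579 --log_dir /tmp/nospam-641579 status
    ls /tmp/nospam-641579   # the logfiles the harness cleans up between steps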

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17263-642665/.minikube/files/etc/test/nested/copy/648003/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)
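
Background for the path above: anything placed under the files/ tree of the minikube home directory is copied into the node filesystem at start, rooted at /, so .minikube/files/etc/test/nested/copy/648003/hosts becomes /etc/test/nested/copy/648003/hosts inside the node. Sketch with a hypothetical payload:

    mkdir -p ~/.minikube/files/etc/test/nested/copy/648003
    echo "127.0.0.1 example.test" > ~/.minikube/files/etc/test/nested/copy/648003/hosts   # hypothetical content
    minikube start -p functional-382151
    minikube -p functional-382151 ssh -- cat /etc/test/nested/copy/648003/hosts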

TestFunctional/serial/StartWithProxy (76.99s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-382151 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E0918 19:03:00.487223  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/client.crt: no such file or directory
E0918 19:03:00.492964  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/client.crt: no such file or directory
E0918 19:03:00.503277  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/client.crt: no such file or directory
E0918 19:03:00.523565  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/client.crt: no such file or directory
E0918 19:03:00.563921  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/client.crt: no such file or directory
E0918 19:03:00.644179  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/client.crt: no such file or directory
E0918 19:03:00.804573  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/client.crt: no such file or directory
E0918 19:03:01.124919  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/client.crt: no such file or directory
E0918 19:03:01.765833  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/client.crt: no such file or directory
E0918 19:03:03.046596  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/client.crt: no such file or directory
E0918 19:03:05.607200  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/client.crt: no such file or directory
E0918 19:03:10.727733  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/client.crt: no such file or directory
E0918 19:03:20.968633  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/client.crt: no such file or directory
E0918 19:03:41.448878  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-382151 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m16.987207472s)
--- PASS: TestFunctional/serial/StartWithProxy (76.99s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (40.28s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-382151 --alsologtostderr -v=8
E0918 19:04:22.409363  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-382151 --alsologtostderr -v=8: (40.276663729s)
functional_test.go:659: soft start took 40.277251458s for "functional-382151" cluster.
--- PASS: TestFunctional/serial/SoftStart (40.28s)

TestFunctional/serial/KubeContext (0.07s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

TestFunctional/serial/KubectlGetPods (0.1s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-382151 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.15s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-382151 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-382151 cache add registry.k8s.io/pause:3.1: (1.42124357s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-382151 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-382151 cache add registry.k8s.io/pause:3.3: (1.42192053s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-382151 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-382151 cache add registry.k8s.io/pause:latest: (1.3095191s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.15s)

TestFunctional/serial/CacheCmd/cache/add_local (1.2s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-382151 /tmp/TestFunctionalserialCacheCmdcacheadd_local4109442960/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-382151 cache add minikube-local-cache-test:functional-382151
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-382151 cache delete minikube-local-cache-test:functional-382151
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-382151
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.20s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.36s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-382151 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.36s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.09s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-382151 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-382151 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-382151 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (322.932809ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-382151 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-arm64 -p functional-382151 cache reload: (1.061664295s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-382151 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.09s)
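
Taken together, the cache subtests above trace one workflow; condensed below (all commands are the ones logged above; note that list and delete run without -p, since the image cache is shared across profiles):

    minikube -p functional-382151 cache add registry.k8s.io/pause:3.1        # pull, save, and load into the node
    minikube cache list
    minikube -p functional-382151 ssh sudo crictl images                     # confirm inside the node
    minikube -p functional-382151 ssh sudo crictl rmi registry.k8s.io/pause:latest
    minikube -p functional-382151 cache reload                               # push cached images back into the node
    minikube cache delete registry.k8s.io/pause:3.1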

TestFunctional/serial/CacheCmd/cache/delete (0.15s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.15s)

TestFunctional/serial/MinikubeKubectlCmd (0.16s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-382151 kubectl -- --context functional-382151 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.16s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-382151 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

TestFunctional/serial/ExtraConfig (33.64s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-382151 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-382151 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (33.640906389s)
functional_test.go:757: restart took 33.64101624s for "functional-382151" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (33.64s)
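
--extra-config takes component.key=value and threads the value through to the named component's flags. Sketch, with a grep to confirm the apiserver actually picked the plugin up (the pod name follows the usual kube-apiserver-<node> convention, an assumption here):

    minikube start -p functional-382151 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
    kubectl --context functional-382151 -n kube-system get pod kube-apiserver-functional-382151 -o yaml | grep enable-admission-plugins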

TestFunctional/serial/ComponentHealth (0.1s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-382151 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)
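
The check selects control-plane pods by the tier=control-plane label and asserts each is Running and Ready, which is what the phase/status pairs above record. An equivalent one-liner, as a sketch:

    kubectl --context functional-382151 -n kube-system get po -l tier=control-plane \
      -o jsonpath='{range .items[*]}{.metadata.labels.component}{" "}{.status.phase}{"\n"}{end}'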

TestFunctional/serial/LogsCmd (1.97s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-382151 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-382151 logs: (1.974189702s)
--- PASS: TestFunctional/serial/LogsCmd (1.97s)

TestFunctional/serial/LogsFileCmd (1.92s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-382151 logs --file /tmp/TestFunctionalserialLogsFileCmd2595656987/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-382151 logs --file /tmp/TestFunctionalserialLogsFileCmd2595656987/001/logs.txt: (1.915508747s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.92s)

TestFunctional/serial/InvalidService (4.15s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-382151 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-382151
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-382151: exit status 115 (416.64853ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31866 |
	|-----------|-------------|-------------|---------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-382151 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.15s)
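
The assertion here is the exit code: minikube service refuses a service with no running pods behind it and exits 115 with SVC_UNREACHABLE, as the stderr above shows. Sketch:

    kubectl --context functional-382151 apply -f testdata/invalidsvc.yaml
    minikube service invalid-svc -p functional-382151; echo "exit=$?"   # expect exit=115
    kubectl --context functional-382151 delete -f testdata/invalidsvc.yaml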

TestFunctional/parallel/ConfigCmd (0.52s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-382151 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-382151 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-382151 config get cpus: exit status 14 (77.638149ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-382151 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-382151 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-382151 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-382151 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-382151 config get cpus: exit status 14 (75.669399ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.52s)
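
The round trip above pins down the contract: config get on an unset key exits 14 with "specified key could not be found in config", and set/unset toggle that state. Sketch:

    minikube -p functional-382151 config get cpus; echo $?   # 14 while unset
    minikube -p functional-382151 config set cpus 2
    minikube -p functional-382151 config get cpus            # prints 2, exit 0
    minikube -p functional-382151 config unset cpus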

TestFunctional/parallel/DashboardCmd (10.06s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-382151 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-382151 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 673025: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.06s)

TestFunctional/parallel/DryRun (0.52s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-382151 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-382151 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (231.254844ms)
-- stdout --
	* [functional-382151] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17263
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17263-642665/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17263-642665/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0918 19:05:47.320068  672575 out.go:296] Setting OutFile to fd 1 ...
	I0918 19:05:47.320199  672575 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 19:05:47.320210  672575 out.go:309] Setting ErrFile to fd 2...
	I0918 19:05:47.320216  672575 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 19:05:47.320512  672575 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17263-642665/.minikube/bin
	I0918 19:05:47.320924  672575 out.go:303] Setting JSON to false
	I0918 19:05:47.322268  672575 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":10093,"bootTime":1695053855,"procs":269,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0918 19:05:47.322354  672575 start.go:138] virtualization:  
	I0918 19:05:47.330952  672575 out.go:177] * [functional-382151] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0918 19:05:47.333493  672575 out.go:177]   - MINIKUBE_LOCATION=17263
	I0918 19:05:47.333658  672575 notify.go:220] Checking for updates...
	I0918 19:05:47.338393  672575 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 19:05:47.340746  672575 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17263-642665/kubeconfig
	I0918 19:05:47.343094  672575 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17263-642665/.minikube
	I0918 19:05:47.345885  672575 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0918 19:05:47.348490  672575 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 19:05:47.351044  672575 config.go:182] Loaded profile config "functional-382151": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0918 19:05:47.351622  672575 driver.go:373] Setting default libvirt URI to qemu:///system
	I0918 19:05:47.377632  672575 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0918 19:05:47.377745  672575 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0918 19:05:47.470747  672575 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:46 SystemTime:2023-09-18 19:05:47.460130037 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0918 19:05:47.470859  672575 docker.go:294] overlay module found
	I0918 19:05:47.473886  672575 out.go:177] * Using the docker driver based on existing profile
	I0918 19:05:47.475931  672575 start.go:298] selected driver: docker
	I0918 19:05:47.475952  672575 start.go:902] validating driver "docker" against &{Name:functional-382151 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:functional-382151 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0918 19:05:47.476069  672575 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 19:05:47.479527  672575 out.go:177] 
	W0918 19:05:47.482295  672575 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0918 19:05:47.484802  672575 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-382151 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.52s)
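
--dry-run validates flags against the existing profile without touching the cluster; an undersized --memory is rejected with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY), as the stderr above shows, while a flagless dry run passes. Sketch:

    minikube start -p functional-382151 --dry-run --memory 250MB --driver=docker --container-runtime=crio; echo "exit=$?"   # expect 23
    minikube start -p functional-382151 --dry-run --driver=docker --container-runtime=crio                                  # passes with profile defaults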

TestFunctional/parallel/InternationalLanguage (0.21s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-382151 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-382151 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (210.878672ms)
-- stdout --
	* [functional-382151] minikube v1.31.2 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17263
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17263-642665/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17263-642665/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0918 19:05:47.099143  672536 out.go:296] Setting OutFile to fd 1 ...
	I0918 19:05:47.099327  672536 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 19:05:47.099357  672536 out.go:309] Setting ErrFile to fd 2...
	I0918 19:05:47.099386  672536 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 19:05:47.099833  672536 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17263-642665/.minikube/bin
	I0918 19:05:47.100282  672536 out.go:303] Setting JSON to false
	I0918 19:05:47.101344  672536 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":10092,"bootTime":1695053855,"procs":269,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0918 19:05:47.101463  672536 start.go:138] virtualization:  
	I0918 19:05:47.104487  672536 out.go:177] * [functional-382151] minikube v1.31.2 sur Ubuntu 20.04 (arm64)
	I0918 19:05:47.107222  672536 out.go:177]   - MINIKUBE_LOCATION=17263
	I0918 19:05:47.107430  672536 notify.go:220] Checking for updates...
	I0918 19:05:47.112444  672536 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 19:05:47.114955  672536 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17263-642665/kubeconfig
	I0918 19:05:47.117454  672536 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17263-642665/.minikube
	I0918 19:05:47.120475  672536 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0918 19:05:47.122938  672536 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 19:05:47.125966  672536 config.go:182] Loaded profile config "functional-382151": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0918 19:05:47.126561  672536 driver.go:373] Setting default libvirt URI to qemu:///system
	I0918 19:05:47.152963  672536 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0918 19:05:47.153071  672536 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0918 19:05:47.236344  672536 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:46 SystemTime:2023-09-18 19:05:47.225876456 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0918 19:05:47.236453  672536 docker.go:294] overlay module found
	I0918 19:05:47.239902  672536 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0918 19:05:47.242627  672536 start.go:298] selected driver: docker
	I0918 19:05:47.242646  672536 start.go:902] validating driver "docker" against &{Name:functional-382151 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694798187-17250@sha256:8d9a070cda8e1b1082ed355bde1aaf66fbf63d64fa6e9f553f449efc74157fe3 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:functional-382151 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0918 19:05:47.242741  672536 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 19:05:47.247587  672536 out.go:177] 
	W0918 19:05:47.249730  672536 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0918 19:05:47.252323  672536 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.21s)
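
Note: the French stderr above is the point of this test. The * line means "Using the docker driver based on the existing profile" and the X line means "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: The requested memory allocation of 250 MiB is below the usable minimum of 1800 MB". A minimal manual reproduction, assuming minikube picks up LC_ALL for localization (flags as exercised by this suite):
	LC_ALL=fr out/minikube-linux-arm64 start -p functional-382151 --dry-run --memory 250MB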

TestFunctional/parallel/StatusCmd (1.3s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-382151 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-382151 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-382151 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.30s)
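
For reference, status -f takes a Go template over minikube's status struct, so fields can be picked individually; "kublet" above is only a literal label in the test's template string, not a field name. A minimal sketch:
	out/minikube-linux-arm64 -p functional-382151 status -f 'apiserver:{{.APIServer}}'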

TestFunctional/parallel/ServiceCmdConnect (12.75s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1626: (dbg) Run:  kubectl --context functional-382151 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-382151 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-lmbz9" [f9a5d049-5e91-4f4d-bdf6-123d62e63356] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-lmbz9" [f9a5d049-5e91-4f4d-bdf6-123d62e63356] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.023912844s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-arm64 -p functional-382151 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:30916
functional_test.go:1674: http://192.168.49.2:30916: success! body:

Hostname: hello-node-connect-7799dfb7c6-lmbz9

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30916
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (12.75s)
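
The flow above can be replayed by hand with the same deployment name and image; the curl step is added here for illustration:
	kubectl --context functional-382151 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
	kubectl --context functional-382151 expose deployment hello-node-connect --type=NodePort --port=8080
	curl -s "$(out/minikube-linux-arm64 -p functional-382151 service hello-node-connect --url)"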

TestFunctional/parallel/AddonsCmd (0.2s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-arm64 -p functional-382151 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-arm64 -p functional-382151 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.20s)
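
A possible follow-up on the JSON listing, assuming jq is available and the output is a map keyed by addon name with a Status field (as recent minikube releases emit):
	out/minikube-linux-arm64 -p functional-382151 addons list -o json | jq -r 'to_entries[] | select(.value.Status == "enabled") | .key'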

TestFunctional/parallel/PersistentVolumeClaim (26.08s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [4eb91307-5a74-4995-94c7-23d89ba70ac4] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.039559225s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-382151 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-382151 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-382151 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-382151 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [032da981-d764-437a-8cd2-9f5486d1c19f] Pending
helpers_test.go:344: "sp-pod" [032da981-d764-437a-8cd2-9f5486d1c19f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [032da981-d764-437a-8cd2-9f5486d1c19f] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.017483282s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-382151 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-382151 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-382151 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [4fa0544b-486e-40b1-87c4-27fdd7dc4b27] Pending
helpers_test.go:344: "sp-pod" [4fa0544b-486e-40b1-87c4-27fdd7dc4b27] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [4fa0544b-486e-40b1-87c4-27fdd7dc4b27] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.024321195s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-382151 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.08s)
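
The pass above hinges on data outliving a pod: the test touches /tmp/mount/foo in the first sp-pod, deletes the pod, recreates it against the same claim, and lists /tmp/mount in the replacement. A quick manual check that the claim bound, assuming testdata's myclaim PVC is applied:
	kubectl --context functional-382151 get pvc myclaim -o jsonpath='{.status.phase} {.spec.volumeName}'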

TestFunctional/parallel/SSHCmd (0.74s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-arm64 -p functional-382151 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-arm64 -p functional-382151 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.74s)

TestFunctional/parallel/CpCmd (1.6s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-382151 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-382151 ssh -n functional-382151 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-382151 cp functional-382151:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2340751427/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-382151 ssh -n functional-382151 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.60s)
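
Both copy directions above use the same source/target form, with node-side paths prefixed by the profile name; a sketch (./cp-test.txt is an illustrative local destination):
	out/minikube-linux-arm64 -p functional-382151 cp testdata/cp-test.txt /home/docker/cp-test.txt
	out/minikube-linux-arm64 -p functional-382151 cp functional-382151:/home/docker/cp-test.txt ./cp-test.txt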

TestFunctional/parallel/FileSync (0.44s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/648003/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-382151 ssh "sudo cat /etc/test/nested/copy/648003/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.44s)

TestFunctional/parallel/CertSync (2.52s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/648003.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-382151 ssh "sudo cat /etc/ssl/certs/648003.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/648003.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-382151 ssh "sudo cat /usr/share/ca-certificates/648003.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-382151 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/6480032.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-382151 ssh "sudo cat /etc/ssl/certs/6480032.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/6480032.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-382151 ssh "sudo cat /usr/share/ca-certificates/6480032.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-382151 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.52s)
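
The .0 files above are OpenSSL subject-hash names for the synced PEMs, which is how the system trust store locates them; a way to verify the pairing, assuming 51391683.0 corresponds to 648003.pem as the test layout suggests:
	openssl x509 -noout -hash -in 648003.pem   # expected to print 51391683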

TestFunctional/parallel/NodeLabels (0.11s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-382151 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.11s)
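
An equivalent check without the quoting-heavy go-template, for reference (jsonpath renders the label map as JSON):
	kubectl --context functional-382151 get nodes -o jsonpath='{.items[0].metadata.labels}'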

TestFunctional/parallel/NonActiveRuntimeDisabled (0.86s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-382151 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-382151 ssh "sudo systemctl is-active docker": exit status 1 (447.581941ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-382151 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-382151 ssh "sudo systemctl is-active containerd": exit status 1 (412.360581ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.86s)
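
Exit status 3 is the conventional systemctl is-active result for an inactive unit, so both non-zero exits above are the expected outcome on a crio cluster. The positive case, assuming the CRI-O unit inside the node is named crio:
	out/minikube-linux-arm64 -p functional-382151 ssh "sudo systemctl is-active crio"   # expected: active, exit 0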

TestFunctional/parallel/License (0.32s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.32s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.69s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-382151 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-382151 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-382151 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-382151 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 670570: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.69s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-382151 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.44s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-382151 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [e773c1f7-f9c3-4077-a49a-126f128bb634] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [e773c1f7-f9c3-4077-a49a-126f128bb634] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.013990282s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.44s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-382151 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.102.81.6 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-382151 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
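
Taken together, the tunnel subtests amount to this flow; a sketch (nginx-svc is the LoadBalancer service from testdata/testsvc.yaml):
	out/minikube-linux-arm64 -p functional-382151 tunnel &
	kubectl --context functional-382151 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'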

TestFunctional/parallel/ServiceCmd/DeployApp (6.23s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1436: (dbg) Run:  kubectl --context functional-382151 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-382151 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-mh988" [46cc1452-20fe-498d-bb47-5a5b645df80a] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-mh988" [46cc1452-20fe-498d-bb47-5a5b645df80a] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.013304896s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.23s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1314: Took "368.773011ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1328: Took "59.775534ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1365: Took "349.567553ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1378: Took "59.822845ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

TestFunctional/parallel/MountCmd/any-port (7.79s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-382151 /tmp/TestFunctionalparallelMountCmdany-port636457112/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1695063941953668928" to /tmp/TestFunctionalparallelMountCmdany-port636457112/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1695063941953668928" to /tmp/TestFunctionalparallelMountCmdany-port636457112/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1695063941953668928" to /tmp/TestFunctionalparallelMountCmdany-port636457112/001/test-1695063941953668928
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-382151 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-382151 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (474.014294ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-382151 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-382151 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 18 19:05 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 18 19:05 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 18 19:05 test-1695063941953668928
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-382151 ssh cat /mount-9p/test-1695063941953668928
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-382151 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [dca0c370-a8f3-49ec-aaa2-a22611c3131c] Pending
E0918 19:05:44.329993  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/client.crt: no such file or directory
helpers_test.go:344: "busybox-mount" [dca0c370-a8f3-49ec-aaa2-a22611c3131c] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [dca0c370-a8f3-49ec-aaa2-a22611c3131c] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [dca0c370-a8f3-49ec-aaa2-a22611c3131c] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.025174749s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-382151 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-382151 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-382151 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-382151 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-382151 /tmp/TestFunctionalparallelMountCmdany-port636457112/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.79s)
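
The single failed findmnt above is the test polling before the 9p mount was established; it retries until the mount appears. A manual version, with /tmp/host-dir as a hypothetical host directory:
	out/minikube-linux-arm64 mount -p functional-382151 /tmp/host-dir:/mount-9p &
	out/minikube-linux-arm64 -p functional-382151 ssh "findmnt -T /mount-9p | grep 9p"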

TestFunctional/parallel/ServiceCmd/List (0.69s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-arm64 -p functional-382151 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.69s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.63s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-arm64 -p functional-382151 service list -o json
functional_test.go:1493: Took "633.955119ms" to run "out/minikube-linux-arm64 -p functional-382151 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.63s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.51s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-arm64 -p functional-382151 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:30846
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.51s)

TestFunctional/parallel/ServiceCmd/Format (0.65s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-arm64 -p functional-382151 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.65s)

TestFunctional/parallel/ServiceCmd/URL (0.48s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-arm64 -p functional-382151 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:30846
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.48s)

TestFunctional/parallel/MountCmd/specific-port (1.66s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-382151 /tmp/TestFunctionalparallelMountCmdspecific-port2918661817/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-382151 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-382151 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-382151 /tmp/TestFunctionalparallelMountCmdspecific-port2918661817/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-382151 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-382151 ssh "sudo umount -f /mount-9p": exit status 1 (381.236474ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-382151 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-382151 /tmp/TestFunctionalparallelMountCmdspecific-port2918661817/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.66s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.25s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-382151 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1050562582/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-382151 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1050562582/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-382151 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1050562582/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-382151 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Done: out/minikube-linux-arm64 -p functional-382151 ssh "findmnt -T" /mount1: (1.237958678s)
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-382151 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-382151 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-382151 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-382151 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1050562582/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-382151 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1050562582/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-382151 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1050562582/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.25s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-382151 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (1.07s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-382151 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-arm64 -p functional-382151 version -o=json --components: (1.072292914s)
--- PASS: TestFunctional/parallel/Version/components (1.07s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-382151 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-382151 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.2
registry.k8s.io/kube-proxy:v1.28.2
registry.k8s.io/kube-controller-manager:v1.28.2
registry.k8s.io/kube-apiserver:v1.28.2
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-382151
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-382151 image ls --format short --alsologtostderr:
I0918 19:06:16.224525  674983 out.go:296] Setting OutFile to fd 1 ...
I0918 19:06:16.224807  674983 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0918 19:06:16.224837  674983 out.go:309] Setting ErrFile to fd 2...
I0918 19:06:16.224858  674983 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0918 19:06:16.225149  674983 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17263-642665/.minikube/bin
I0918 19:06:16.225972  674983 config.go:182] Loaded profile config "functional-382151": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I0918 19:06:16.226191  674983 config.go:182] Loaded profile config "functional-382151": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I0918 19:06:16.226731  674983 cli_runner.go:164] Run: docker container inspect functional-382151 --format={{.State.Status}}
I0918 19:06:16.247306  674983 ssh_runner.go:195] Run: systemctl --version
I0918 19:06:16.247361  674983 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-382151
I0918 19:06:16.278442  674983 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33425 SSHKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/functional-382151/id_rsa Username:docker}
I0918 19:06:16.381854  674983 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)
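
As the stderr trace shows, image ls is answered by running crictl in the node over SSH; the same raw data can be fetched directly:
	out/minikube-linux-arm64 -p functional-382151 ssh "sudo crictl images --output json"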

TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-382151 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-382151 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/library/nginx                 | alpine             | fa0c6bb795403 | 45.3MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| registry.k8s.io/kube-proxy              | v1.28.2            | 7da62c127fc0f | 69.9MB |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
| registry.k8s.io/pause                   | 3.9                | 829e9de338bd5 | 520kB  |
| registry.k8s.io/etcd                    | 3.5.9-0            | 9cdd6470f48c8 | 182MB  |
| docker.io/library/nginx                 | latest             | 91582cfffc2d0 | 196MB  |
| gcr.io/google-containers/addon-resizer  | functional-382151  | ffd4cfbbe753e | 34.1MB |
| registry.k8s.io/coredns/coredns         | v1.10.1            | 97e04611ad434 | 51.4MB |
| registry.k8s.io/kube-controller-manager | v1.28.2            | 89d57b83c1786 | 117MB  |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | 04b4eaa3d3db8 | 60.9MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| registry.k8s.io/kube-apiserver          | v1.28.2            | 30bb499447fe1 | 121MB  |
| registry.k8s.io/kube-scheduler          | v1.28.2            | 64fc40cee3716 | 59.2MB |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-382151 image ls --format table --alsologtostderr:
I0918 19:06:16.837752  675115 out.go:296] Setting OutFile to fd 1 ...
I0918 19:06:16.837932  675115 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0918 19:06:16.837943  675115 out.go:309] Setting ErrFile to fd 2...
I0918 19:06:16.837949  675115 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0918 19:06:16.838234  675115 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17263-642665/.minikube/bin
I0918 19:06:16.838975  675115 config.go:182] Loaded profile config "functional-382151": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I0918 19:06:16.839139  675115 config.go:182] Loaded profile config "functional-382151": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I0918 19:06:16.839718  675115 cli_runner.go:164] Run: docker container inspect functional-382151 --format={{.State.Status}}
I0918 19:06:16.862870  675115 ssh_runner.go:195] Run: systemctl --version
I0918 19:06:16.862922  675115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-382151
I0918 19:06:16.886676  675115 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33425 SSHKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/functional-382151/id_rsa Username:docker}
I0918 19:06:16.987132  675115 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-382151 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-382151 image ls --format json --alsologtostderr:
[{"id":"64fc40cee3716a4596d219b360ce536adb7998eaeae3f5dbb774d6503f5039d7","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab","registry.k8s.io/kube-scheduler@sha256:f673cc4710d8ed6e3bd224b5641d2537d08e19177a291c2d9e189ea16f081c88"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.2"],"size":"59188020"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"520014"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578
e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"60867618"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"91582cfffc2d0daa6f42adb6fb74665a047310f76a28e9ed5b0185a2d0f362a6","repoDigests":["docker.io/library/nginx@sha256:
6926dd802f40e5e7257fded83e0d8030039642e4e10c4a98a6478e9c6fe06153","docker.io/library/nginx@sha256:85eabf2757cb5b5b84248d7feb019079501dfd8691fc79b8b1d0ff1591a6270b"],"repoTags":["docker.io/library/nginx:latest"],"size":"196196618"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-382151"],"size":"34114467"},{"id":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":["registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105","registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"51393451"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha
256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"30bb499447fe1bc655853e2e8ac386cdeb28c80263536259cb54f290f9a58d6c","repoDigests":["registry.k8s.io/kube-apiserver@sha256:07742a71be5e2ac5dc434618fa720ba38bebb463e3bdc0c58b600b4f7716bc3d","registry.k8s.io/kube-apiserver@sha256:6beea2e5531a0606613594fd3ed92d71bbdcef99dd3237522049a0b32cad736c"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.2"],"size":"121054158"},{"id":"89d57b83c17862d0ca2dd214e9e5ad425f8d67ecba32d10b846f8d22d3b5597c"
,"repoDigests":["registry.k8s.io/kube-controller-manager@sha256:3c85f8a91743f4c306163137b121c64816c5c15bf2f002c9238ce0a4ac22b5c8","registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.2"],"size":"117187380"},{"id":"fa0c6bb795403f8762e5cbf7b9f395aa036e7bd61c707485c1968b79bb3da9f1","repoDigests":["docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70","docker.io/library/nginx@sha256:700873f42f88d156b7f78f32f0a1dc782286eedc0f175d62d90870820dd98790"],"repoTags":["docker.io/library/nginx:alpine"],"size":"45265718"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minik
ube/storage-provisioner:v5"],"size":"29037500"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"},{"id":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","repoDigests":["registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3","registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"182203183"},{"id":"7da62c127fc0f2c3473babe4dd0fe1da874278c4e524a490b1781e3e0e6dddfa","repoDigests":["registry.k8s.io/kube-proxy@sha256:41c8f92d1cd571e0e36af431f35c78379f84f5daf5b85d43014a9940d697afcf","registry.k8s.io/kube-proxy@sha256:714d43ef0334cfb0e15ffd89f0b385681374b72a4865be28ff891b6297c015b8"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.2"],"size":"69926807"},{"id":"3d1
8732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-382151 image ls --format json --alsologtostderr:
I0918 19:06:16.568941  675045 out.go:296] Setting OutFile to fd 1 ...
I0918 19:06:16.569126  675045 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0918 19:06:16.569131  675045 out.go:309] Setting ErrFile to fd 2...
I0918 19:06:16.569137  675045 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0918 19:06:16.569428  675045 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17263-642665/.minikube/bin
I0918 19:06:16.570101  675045 config.go:182] Loaded profile config "functional-382151": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I0918 19:06:16.570283  675045 config.go:182] Loaded profile config "functional-382151": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I0918 19:06:16.570805  675045 cli_runner.go:164] Run: docker container inspect functional-382151 --format={{.State.Status}}
I0918 19:06:16.598211  675045 ssh_runner.go:195] Run: systemctl --version
I0918 19:06:16.598266  675045 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-382151
I0918 19:06:16.620539  675045 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33425 SSHKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/functional-382151/id_rsa Username:docker}
I0918 19:06:16.719456  675045 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)
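
The JSON form lends itself to ad-hoc queries; a sketch assuming jq is available (size is a byte-count string in this output):
	out/minikube-linux-arm64 -p functional-382151 image ls --format json | jq -r '.[] | .size + "\t" + (.repoTags[0] // .id[0:13])' | sort -n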

TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-382151 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-382151 image ls --format yaml --alsologtostderr:
- id: 04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "60867618"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 91582cfffc2d0daa6f42adb6fb74665a047310f76a28e9ed5b0185a2d0f362a6
repoDigests:
- docker.io/library/nginx@sha256:6926dd802f40e5e7257fded83e0d8030039642e4e10c4a98a6478e9c6fe06153
- docker.io/library/nginx@sha256:85eabf2757cb5b5b84248d7feb019079501dfd8691fc79b8b1d0ff1591a6270b
repoTags:
- docker.io/library/nginx:latest
size: "196196618"
- id: 30bb499447fe1bc655853e2e8ac386cdeb28c80263536259cb54f290f9a58d6c
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:07742a71be5e2ac5dc434618fa720ba38bebb463e3bdc0c58b600b4f7716bc3d
- registry.k8s.io/kube-apiserver@sha256:6beea2e5531a0606613594fd3ed92d71bbdcef99dd3237522049a0b32cad736c
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.2
size: "121054158"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-382151
size: "34114467"
- id: 7da62c127fc0f2c3473babe4dd0fe1da874278c4e524a490b1781e3e0e6dddfa
repoDigests:
- registry.k8s.io/kube-proxy@sha256:41c8f92d1cd571e0e36af431f35c78379f84f5daf5b85d43014a9940d697afcf
- registry.k8s.io/kube-proxy@sha256:714d43ef0334cfb0e15ffd89f0b385681374b72a4865be28ff891b6297c015b8
repoTags:
- registry.k8s.io/kube-proxy:v1.28.2
size: "69926807"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "520014"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace
repoDigests:
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
- registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "182203183"
- id: 89d57b83c17862d0ca2dd214e9e5ad425f8d67ecba32d10b846f8d22d3b5597c
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:3c85f8a91743f4c306163137b121c64816c5c15bf2f002c9238ce0a4ac22b5c8
- registry.k8s.io/kube-controller-manager@sha256:6a42ce14d716205a99763f3c732c0a8f0ea041bdbbea7d2dfffcc53dafd7cac4
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.2
size: "117187380"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: fa0c6bb795403f8762e5cbf7b9f395aa036e7bd61c707485c1968b79bb3da9f1
repoDigests:
- docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70
- docker.io/library/nginx@sha256:700873f42f88d156b7f78f32f0a1dc782286eedc0f175d62d90870820dd98790
repoTags:
- docker.io/library/nginx:alpine
size: "45265718"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "51393451"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: 64fc40cee3716a4596d219b360ce536adb7998eaeae3f5dbb774d6503f5039d7
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6511193f8114a2f011790619698efe12a8119ed9a17e2e36f4c1c759ccf173ab
- registry.k8s.io/kube-scheduler@sha256:f673cc4710d8ed6e3bd224b5641d2537d08e19177a291c2d9e189ea16f081c88
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.2
size: "59188020"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-382151 image ls --format yaml --alsologtostderr:
I0918 19:06:16.235030  674984 out.go:296] Setting OutFile to fd 1 ...
I0918 19:06:16.235267  674984 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0918 19:06:16.235277  674984 out.go:309] Setting ErrFile to fd 2...
I0918 19:06:16.235283  674984 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0918 19:06:16.235587  674984 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17263-642665/.minikube/bin
I0918 19:06:16.236281  674984 config.go:182] Loaded profile config "functional-382151": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I0918 19:06:16.236406  674984 config.go:182] Loaded profile config "functional-382151": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I0918 19:06:16.237066  674984 cli_runner.go:164] Run: docker container inspect functional-382151 --format={{.State.Status}}
I0918 19:06:16.258738  674984 ssh_runner.go:195] Run: systemctl --version
I0918 19:06:16.258798  674984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-382151
I0918 19:06:16.299905  674984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33425 SSHKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/functional-382151/id_rsa Username:docker}
I0918 19:06:16.398128  674984 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-382151 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-382151 ssh pgrep buildkitd: exit status 1 (408.954941ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-382151 image build -t localhost/my-image:functional-382151 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-382151 image build -t localhost/my-image:functional-382151 testdata/build --alsologtostderr: (2.70210728s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-arm64 -p functional-382151 image build -t localhost/my-image:functional-382151 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 8b7a7aaf441
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-382151
--> ebcba82fdab
Successfully tagged localhost/my-image:functional-382151
ebcba82fdab07121f06ad0b759822694cebaa32a25185b016c5e5a8088bdc897
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-382151 image build -t localhost/my-image:functional-382151 testdata/build --alsologtostderr:
I0918 19:06:16.937256  675128 out.go:296] Setting OutFile to fd 1 ...
I0918 19:06:16.938179  675128 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0918 19:06:16.938219  675128 out.go:309] Setting ErrFile to fd 2...
I0918 19:06:16.938239  675128 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0918 19:06:16.938513  675128 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17263-642665/.minikube/bin
I0918 19:06:16.939225  675128 config.go:182] Loaded profile config "functional-382151": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I0918 19:06:16.939898  675128 config.go:182] Loaded profile config "functional-382151": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
I0918 19:06:16.940453  675128 cli_runner.go:164] Run: docker container inspect functional-382151 --format={{.State.Status}}
I0918 19:06:16.959272  675128 ssh_runner.go:195] Run: systemctl --version
I0918 19:06:16.959338  675128 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-382151
I0918 19:06:16.983388  675128 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33425 SSHKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/functional-382151/id_rsa Username:docker}
I0918 19:06:17.093705  675128 build_images.go:151] Building image from path: /tmp/build.3216451804.tar
I0918 19:06:17.093774  675128 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0918 19:06:17.104848  675128 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3216451804.tar
I0918 19:06:17.109491  675128 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3216451804.tar: stat -c "%s %y" /var/lib/minikube/build/build.3216451804.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3216451804.tar': No such file or directory
I0918 19:06:17.109526  675128 ssh_runner.go:362] scp /tmp/build.3216451804.tar --> /var/lib/minikube/build/build.3216451804.tar (3072 bytes)
I0918 19:06:17.140447  675128 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3216451804
I0918 19:06:17.151132  675128 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3216451804 -xf /var/lib/minikube/build/build.3216451804.tar
I0918 19:06:17.162652  675128 crio.go:297] Building image: /var/lib/minikube/build/build.3216451804
I0918 19:06:17.162737  675128 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-382151 /var/lib/minikube/build/build.3216451804 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I0918 19:06:19.531351  675128 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-382151 /var/lib/minikube/build/build.3216451804 --cgroup-manager=cgroupfs: (2.368584395s)
I0918 19:06:19.531433  675128 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3216451804
I0918 19:06:19.542433  675128 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3216451804.tar
I0918 19:06:19.553135  675128 build_images.go:207] Built localhost/my-image:functional-382151 from /tmp/build.3216451804.tar
I0918 19:06:19.553169  675128 build_images.go:123] succeeded building to: functional-382151
I0918 19:06:19.553174  675128 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-382151 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.38s)
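
The three STEP lines in the build log imply a minimal build context. A sketch of recreating an equivalent context by hand (the file contents here are assumptions; only the FROM/RUN/ADD structure comes from the log):

	mkdir -p testdata/build && cd testdata/build
	printf 'hello\n' > content.txt   # any content works; the log only shows the file being ADDed
	cat > Dockerfile <<'EOF'
	FROM gcr.io/k8s-minikube/busybox
	RUN true
	ADD content.txt /
	EOF
	out/minikube-linux-arm64 -p functional-382151 image build -t localhost/my-image:functional-382151 .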

TestFunctional/parallel/ImageCommands/Setup (2.61s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.581521119s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-382151
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.61s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.99s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-382151 image load --daemon gcr.io/google-containers/addon-resizer:functional-382151 --alsologtostderr
2023/09/18 19:05:57 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-382151 image load --daemon gcr.io/google-containers/addon-resizer:functional-382151 --alsologtostderr: (5.736061155s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-382151 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.99s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.26s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-382151 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.26s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.29s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-382151 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.29s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.26s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-382151 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.26s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.02s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-382151 image load --daemon gcr.io/google-containers/addon-resizer:functional-382151 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-382151 image load --daemon gcr.io/google-containers/addon-resizer:functional-382151 --alsologtostderr: (2.77200355s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-382151 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.02s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.84s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.780743987s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-382151
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-382151 image load --daemon gcr.io/google-containers/addon-resizer:functional-382151 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-382151 image load --daemon gcr.io/google-containers/addon-resizer:functional-382151 --alsologtostderr: (3.765662478s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-382151 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.84s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.94s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-382151 image save gcr.io/google-containers/addon-resizer:functional-382151 /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.94s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-382151 image rm gcr.io/google-containers/addon-resizer:functional-382151 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-382151 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-382151 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-arm64 -p functional-382151 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr: (1.061822899s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-382151 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.40s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.98s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-382151
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-382151 image save --daemon gcr.io/google-containers/addon-resizer:functional-382151 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-382151
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.98s)
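
Taken together, ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon exercise a full save/remove/load round trip. Condensed into a by-hand sequence (tarball path shortened, otherwise the same commands the tests ran):

	out/minikube-linux-arm64 -p functional-382151 image save gcr.io/google-containers/addon-resizer:functional-382151 ./addon-resizer-save.tar
	out/minikube-linux-arm64 -p functional-382151 image rm gcr.io/google-containers/addon-resizer:functional-382151
	out/minikube-linux-arm64 -p functional-382151 image load ./addon-resizer-save.tar
	out/minikube-linux-arm64 -p functional-382151 image ls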

TestFunctional/delete_addon-resizer_images (0.1s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-382151
--- PASS: TestFunctional/delete_addon-resizer_images (0.10s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-382151
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-382151
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestIngressAddonLegacy/StartLegacyK8sCluster (103.06s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-arm64 start -p ingress-addon-legacy-407320 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0918 19:08:00.487414  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-arm64 start -p ingress-addon-legacy-407320 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m43.061205183s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (103.06s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (12.48s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-407320 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-407320 addons enable ingress --alsologtostderr -v=5: (12.483380546s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (12.48s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.69s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-407320 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.69s)

TestJSONOutput/start/Command (73.98s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-875309 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E0918 19:11:36.518257  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/functional-382151/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-875309 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (1m13.978443783s)
--- PASS: TestJSONOutput/start/Command (73.98s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.84s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-875309 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.84s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.74s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-875309 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.74s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.92s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-875309 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-875309 --output=json --user=testUser: (5.923979103s)
--- PASS: TestJSONOutput/stop/Command (5.92s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-624108 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-624108 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (79.882624ms)

-- stdout --
	{"specversion":"1.0","id":"e89d21a8-fe20-43fe-a287-c65cdc28aec8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-624108] minikube v1.31.2 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c1c70255-d759-46c6-bc4a-ec613eba66f0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17263"}}
	{"specversion":"1.0","id":"fde85bd8-715e-4f29-8406-b377e5211082","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a2bc8d41-c961-4253-b403-c71b5b981143","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17263-642665/kubeconfig"}}
	{"specversion":"1.0","id":"c1804fad-bb1d-441f-895f-b6858d34a728","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17263-642665/.minikube"}}
	{"specversion":"1.0","id":"6af3bd13-a00b-4e98-807c-8e5f081e384e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"c5717e0e-d7af-4e49-8333-260e4ab3f4cf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"71d3ded5-43c2-4eba-9961-d47c130b6f18","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-624108" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-624108
--- PASS: TestErrorJSONOutput (0.23s)
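
Each stdout line in --output=json mode is a self-contained CloudEvents-style JSON object, so the stream can be post-processed line by line. A hedged example of isolating just the error events (assumes jq is installed; the type string is copied from the output above):

	out/minikube-linux-arm64 start -p json-output-error-624108 --memory=2200 --output=json --wait=true --driver=fail \
	  | jq -c 'select(.type == "io.k8s.sigs.minikube.error") | .data'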

TestKicCustomNetwork/create_custom_network (45.58s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-543231 --network=
E0918 19:12:58.438403  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/functional-382151/client.crt: no such file or directory
E0918 19:13:00.486470  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/client.crt: no such file or directory
E0918 19:13:19.089859  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/ingress-addon-legacy-407320/client.crt: no such file or directory
E0918 19:13:19.101422  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/ingress-addon-legacy-407320/client.crt: no such file or directory
E0918 19:13:19.111945  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/ingress-addon-legacy-407320/client.crt: no such file or directory
E0918 19:13:19.132735  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/ingress-addon-legacy-407320/client.crt: no such file or directory
E0918 19:13:19.173366  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/ingress-addon-legacy-407320/client.crt: no such file or directory
E0918 19:13:19.253644  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/ingress-addon-legacy-407320/client.crt: no such file or directory
E0918 19:13:19.414366  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/ingress-addon-legacy-407320/client.crt: no such file or directory
E0918 19:13:19.735279  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/ingress-addon-legacy-407320/client.crt: no such file or directory
E0918 19:13:20.376237  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/ingress-addon-legacy-407320/client.crt: no such file or directory
E0918 19:13:21.656508  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/ingress-addon-legacy-407320/client.crt: no such file or directory
E0918 19:13:24.216649  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/ingress-addon-legacy-407320/client.crt: no such file or directory
E0918 19:13:29.337689  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/ingress-addon-legacy-407320/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-543231 --network=: (43.456210339s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-543231" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-543231
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-543231: (2.096062721s)
--- PASS: TestKicCustomNetwork/create_custom_network (45.58s)

TestKicCustomNetwork/use_default_bridge_network (35.39s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-601073 --network=bridge
E0918 19:13:39.577863  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/ingress-addon-legacy-407320/client.crt: no such file or directory
E0918 19:14:00.058075  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/ingress-addon-legacy-407320/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-601073 --network=bridge: (33.458039106s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-601073" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-601073
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-601073: (1.90465915s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (35.39s)

TestKicExistingNetwork (34.17s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-791347 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-791347 --network=existing-network: (32.053834579s)
helpers_test.go:175: Cleaning up "existing-network-791347" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-791347
E0918 19:14:41.018428  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/ingress-addon-legacy-407320/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-791347: (1.954085549s)
--- PASS: TestKicExistingNetwork (34.17s)
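
Unlike the custom-network cases, TestKicExistingNetwork needs the Docker network to exist before minikube starts; the harness presumably creates it out of band, since only the docker network ls check is visible above. A sketch of the manual equivalent (default IPAM assumed):

	docker network create existing-network
	out/minikube-linux-arm64 start -p existing-network-791347 --network=existing-network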

TestKicCustomSubnet (37.22s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-606475 --subnet=192.168.60.0/24
E0918 19:15:14.595587  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/functional-382151/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-606475 --subnet=192.168.60.0/24: (35.053084363s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-606475 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-606475" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-606475
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-606475: (2.147437395s)
--- PASS: TestKicCustomSubnet (37.22s)

TestKicStaticIP (36.9s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-590759 --static-ip=192.168.200.200
E0918 19:15:42.283071  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/functional-382151/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-590759 --static-ip=192.168.200.200: (34.615433559s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-590759 ip
helpers_test.go:175: Cleaning up "static-ip-590759" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-590759
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-590759: (2.113522708s)
--- PASS: TestKicStaticIP (36.90s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (74.6s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-856514 --driver=docker  --container-runtime=crio
E0918 19:16:02.939934  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/ingress-addon-legacy-407320/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-856514 --driver=docker  --container-runtime=crio: (33.191829444s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-859017 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-859017 --driver=docker  --container-runtime=crio: (35.99210352s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-856514
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-859017
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-859017" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-859017
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-859017: (2.043291172s)
helpers_test.go:175: Cleaning up "first-856514" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-856514
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-856514: (1.990491171s)
--- PASS: TestMinikubeProfile (74.60s)
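
profile list -ojson emits machine-readable profile data, which is what the test inspects. A hedged example of extracting the profile names from it (assumes the JSON carries a top-level "valid" array of profiles with a "Name" field, as recent minikube releases do):

	out/minikube-linux-arm64 profile list -ojson | jq -r '.valid[].Name'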

TestMountStart/serial/StartWithMountFirst (7.04s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-848504 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-848504 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.04137962s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.04s)

TestMountStart/serial/VerifyMountFirst (0.29s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-848504 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.29s)

TestMountStart/serial/StartWithMountSecond (10.06s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-850276 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-850276 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (9.058315883s)
--- PASS: TestMountStart/serial/StartWithMountSecond (10.06s)

TestMountStart/serial/VerifyMountSecond (0.28s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-850276 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

TestMountStart/serial/DeleteFirst (1.69s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-848504 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-848504 --alsologtostderr -v=5: (1.690303268s)
--- PASS: TestMountStart/serial/DeleteFirst (1.69s)

TestMountStart/serial/VerifyMountPostDelete (0.29s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-850276 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.29s)

TestMountStart/serial/Stop (1.24s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-850276
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-850276: (1.241331155s)
--- PASS: TestMountStart/serial/Stop (1.24s)

TestMountStart/serial/RestartStopped (8.77s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-850276
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-850276: (7.773379168s)
--- PASS: TestMountStart/serial/RestartStopped (8.77s)

TestMountStart/serial/VerifyMountPostStop (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-850276 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

TestMultiNode/serial/FreshStart2Nodes (131.39s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-689235 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0918 19:18:00.487232  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/client.crt: no such file or directory
E0918 19:18:19.084107  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/ingress-addon-legacy-407320/client.crt: no such file or directory
E0918 19:18:46.780850  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/ingress-addon-legacy-407320/client.crt: no such file or directory
E0918 19:19:23.532081  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-arm64 start -p multinode-689235 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (2m10.825921365s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689235 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (131.39s)

TestMultiNode/serial/DeployApp2Nodes (6.18s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-689235 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-689235 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-689235 -- rollout status deployment/busybox: (3.522613665s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-689235 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-689235 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-689235 -- exec busybox-5bc68d56bd-2bktr -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-689235 -- exec busybox-5bc68d56bd-rmmxk -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-689235 -- exec busybox-5bc68d56bd-2bktr -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-689235 -- exec busybox-5bc68d56bd-rmmxk -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-689235 -- exec busybox-5bc68d56bd-2bktr -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-689235 -- exec busybox-5bc68d56bd-rmmxk -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.18s)
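
Note: the DNS validation above reduces to exec-ing nslookup in every busybox replica; a hand-run equivalent of the same steps (manifest path as in the test tree, pod names gathered the same way the test does):

	$ kubectl apply -f testdata/multinodes/multinode-pod-dns-test.yaml
	$ kubectl rollout status deployment/busybox
	$ for pod in $(kubectl get pods -o jsonpath='{.items[*].metadata.name}'); do
	      kubectl exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
	  done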

                                                
                                    
TestMultiNode/serial/AddNode (50.56s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-689235 -v 3 --alsologtostderr
E0918 19:20:14.595681  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/functional-382151/client.crt: no such file or directory
multinode_test.go:110: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-689235 -v 3 --alsologtostderr: (49.82436163s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689235 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (50.56s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.36s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.36s)

                                                
                                    
TestMultiNode/serial/CopyFile (11.08s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689235 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689235 cp testdata/cp-test.txt multinode-689235:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689235 ssh -n multinode-689235 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689235 cp multinode-689235:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile207353109/001/cp-test_multinode-689235.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689235 ssh -n multinode-689235 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689235 cp multinode-689235:/home/docker/cp-test.txt multinode-689235-m02:/home/docker/cp-test_multinode-689235_multinode-689235-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689235 ssh -n multinode-689235 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689235 ssh -n multinode-689235-m02 "sudo cat /home/docker/cp-test_multinode-689235_multinode-689235-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689235 cp multinode-689235:/home/docker/cp-test.txt multinode-689235-m03:/home/docker/cp-test_multinode-689235_multinode-689235-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689235 ssh -n multinode-689235 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689235 ssh -n multinode-689235-m03 "sudo cat /home/docker/cp-test_multinode-689235_multinode-689235-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689235 cp testdata/cp-test.txt multinode-689235-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689235 ssh -n multinode-689235-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689235 cp multinode-689235-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile207353109/001/cp-test_multinode-689235-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689235 ssh -n multinode-689235-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689235 cp multinode-689235-m02:/home/docker/cp-test.txt multinode-689235:/home/docker/cp-test_multinode-689235-m02_multinode-689235.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689235 ssh -n multinode-689235-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689235 ssh -n multinode-689235 "sudo cat /home/docker/cp-test_multinode-689235-m02_multinode-689235.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689235 cp multinode-689235-m02:/home/docker/cp-test.txt multinode-689235-m03:/home/docker/cp-test_multinode-689235-m02_multinode-689235-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689235 ssh -n multinode-689235-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689235 ssh -n multinode-689235-m03 "sudo cat /home/docker/cp-test_multinode-689235-m02_multinode-689235-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689235 cp testdata/cp-test.txt multinode-689235-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689235 ssh -n multinode-689235-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689235 cp multinode-689235-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile207353109/001/cp-test_multinode-689235-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689235 ssh -n multinode-689235-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689235 cp multinode-689235-m03:/home/docker/cp-test.txt multinode-689235:/home/docker/cp-test_multinode-689235-m03_multinode-689235.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689235 ssh -n multinode-689235-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689235 ssh -n multinode-689235 "sudo cat /home/docker/cp-test_multinode-689235-m03_multinode-689235.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689235 cp multinode-689235-m03:/home/docker/cp-test.txt multinode-689235-m02:/home/docker/cp-test_multinode-689235-m03_multinode-689235-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689235 ssh -n multinode-689235-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689235 ssh -n multinode-689235-m02 "sudo cat /home/docker/cp-test_multinode-689235-m03_multinode-689235-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (11.08s)
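
Note: every CopyFile step pairs "minikube cp" with an ssh cat to verify the round-trip; condensed to the three directions it covers (host to node, node to host, node to node), again with "demo" as a placeholder profile:

	$ minikube -p demo cp testdata/cp-test.txt demo:/home/docker/cp-test.txt
	$ minikube -p demo ssh -n demo "sudo cat /home/docker/cp-test.txt"
	$ minikube -p demo cp demo:/home/docker/cp-test.txt /tmp/cp-test-roundtrip.txt
	$ minikube -p demo cp demo:/home/docker/cp-test.txt demo-m02:/home/docker/cp-test_from-demo.txt
	$ minikube -p demo ssh -n demo-m02 "sudo cat /home/docker/cp-test_from-demo.txt"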

                                                
                                    
TestMultiNode/serial/StopNode (2.42s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689235 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-arm64 -p multinode-689235 node stop m03: (1.23404262s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689235 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-689235 status: exit status 7 (587.132834ms)

                                                
                                                
-- stdout --
	multinode-689235
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-689235-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-689235-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689235 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-689235 status --alsologtostderr: exit status 7 (596.836437ms)

                                                
                                                
-- stdout --
	multinode-689235
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-689235-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-689235-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0918 19:21:10.037701  721854 out.go:296] Setting OutFile to fd 1 ...
	I0918 19:21:10.037933  721854 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 19:21:10.037956  721854 out.go:309] Setting ErrFile to fd 2...
	I0918 19:21:10.037976  721854 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 19:21:10.038337  721854 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17263-642665/.minikube/bin
	I0918 19:21:10.038579  721854 out.go:303] Setting JSON to false
	I0918 19:21:10.038669  721854 mustload.go:65] Loading cluster: multinode-689235
	I0918 19:21:10.039137  721854 config.go:182] Loaded profile config "multinode-689235": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0918 19:21:10.039202  721854 status.go:255] checking status of multinode-689235 ...
	I0918 19:21:10.040067  721854 cli_runner.go:164] Run: docker container inspect multinode-689235 --format={{.State.Status}}
	I0918 19:21:10.043396  721854 notify.go:220] Checking for updates...
	I0918 19:21:10.064250  721854 status.go:330] multinode-689235 host status = "Running" (err=<nil>)
	I0918 19:21:10.064279  721854 host.go:66] Checking if "multinode-689235" exists ...
	I0918 19:21:10.064610  721854 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-689235
	I0918 19:21:10.088136  721854 host.go:66] Checking if "multinode-689235" exists ...
	I0918 19:21:10.088474  721854 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0918 19:21:10.088531  721854 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-689235
	I0918 19:21:10.119831  721854 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33490 SSHKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/multinode-689235/id_rsa Username:docker}
	I0918 19:21:10.219736  721854 ssh_runner.go:195] Run: systemctl --version
	I0918 19:21:10.226543  721854 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 19:21:10.241320  721854 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0918 19:21:10.308850  721854 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:55 SystemTime:2023-09-18 19:21:10.298472164 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0918 19:21:10.309456  721854 kubeconfig.go:92] found "multinode-689235" server: "https://192.168.58.2:8443"
	I0918 19:21:10.309478  721854 api_server.go:166] Checking apiserver status ...
	I0918 19:21:10.309520  721854 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0918 19:21:10.322676  721854 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1253/cgroup
	I0918 19:21:10.334084  721854 api_server.go:182] apiserver freezer: "11:freezer:/docker/e0b155a28412be3d94e22f1ca1010ac124c38296f3bdf609ef8b0f402546fbe5/crio/crio-4518bed1b6e783c08526b0075adcb0b0d9a0ad1cd5c514789c5213d741c870fe"
	I0918 19:21:10.334152  721854 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/e0b155a28412be3d94e22f1ca1010ac124c38296f3bdf609ef8b0f402546fbe5/crio/crio-4518bed1b6e783c08526b0075adcb0b0d9a0ad1cd5c514789c5213d741c870fe/freezer.state
	I0918 19:21:10.344310  721854 api_server.go:204] freezer state: "THAWED"
	I0918 19:21:10.344337  721854 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0918 19:21:10.353127  721854 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0918 19:21:10.353157  721854 status.go:421] multinode-689235 apiserver status = Running (err=<nil>)
	I0918 19:21:10.353181  721854 status.go:257] multinode-689235 status: &{Name:multinode-689235 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0918 19:21:10.353204  721854 status.go:255] checking status of multinode-689235-m02 ...
	I0918 19:21:10.353512  721854 cli_runner.go:164] Run: docker container inspect multinode-689235-m02 --format={{.State.Status}}
	I0918 19:21:10.378459  721854 status.go:330] multinode-689235-m02 host status = "Running" (err=<nil>)
	I0918 19:21:10.378485  721854 host.go:66] Checking if "multinode-689235-m02" exists ...
	I0918 19:21:10.378796  721854 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-689235-m02
	I0918 19:21:10.402060  721854 host.go:66] Checking if "multinode-689235-m02" exists ...
	I0918 19:21:10.402355  721854 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0918 19:21:10.402409  721854 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-689235-m02
	I0918 19:21:10.421038  721854 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33495 SSHKeyPath:/home/jenkins/minikube-integration/17263-642665/.minikube/machines/multinode-689235-m02/id_rsa Username:docker}
	I0918 19:21:10.518311  721854 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0918 19:21:10.532829  721854 status.go:257] multinode-689235-m02 status: &{Name:multinode-689235-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0918 19:21:10.532866  721854 status.go:255] checking status of multinode-689235-m03 ...
	I0918 19:21:10.533198  721854 cli_runner.go:164] Run: docker container inspect multinode-689235-m03 --format={{.State.Status}}
	I0918 19:21:10.551698  721854 status.go:330] multinode-689235-m03 host status = "Stopped" (err=<nil>)
	I0918 19:21:10.551725  721854 status.go:343] host is not running, skipping remaining checks
	I0918 19:21:10.551732  721854 status.go:257] multinode-689235-m03 status: &{Name:multinode-689235-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.42s)
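
Note: once a node is stopped, "minikube status" still prints the per-node report but exits 7, so callers have to treat 7 as "degraded, not broken". A sketch of the same check:

	$ minikube -p demo node stop m03
	$ minikube -p demo status; echo "exit=$?"    # per-node report, then exit=7 while m03 is down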

                                                
                                    
TestMultiNode/serial/StartAfterStop (13.48s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:244: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689235 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-linux-arm64 -p multinode-689235 node start m03 --alsologtostderr: (12.618598042s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689235 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (13.48s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (121.97s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-689235
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-689235
multinode_test.go:290: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-689235: (25.05281947s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-689235 --wait=true -v=8 --alsologtostderr
E0918 19:23:00.486953  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/client.crt: no such file or directory
E0918 19:23:19.084572  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/ingress-addon-legacy-407320/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-arm64 start -p multinode-689235 --wait=true -v=8 --alsologtostderr: (1m36.769983869s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-689235
--- PASS: TestMultiNode/serial/RestartKeepsNodes (121.97s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.24s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689235 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-arm64 -p multinode-689235 node delete m03: (4.496429634s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689235 status --alsologtostderr
multinode_test.go:414: (dbg) Run:  docker volume ls
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.24s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.16s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689235 stop
multinode_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p multinode-689235 stop: (23.984822444s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689235 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-689235 status: exit status 7 (91.453138ms)

                                                
                                                
-- stdout --
	multinode-689235
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-689235-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689235 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-689235 status --alsologtostderr: exit status 7 (86.749995ms)

                                                
                                                
-- stdout --
	multinode-689235
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-689235-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0918 19:23:55.372675  730111 out.go:296] Setting OutFile to fd 1 ...
	I0918 19:23:55.372819  730111 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 19:23:55.372830  730111 out.go:309] Setting ErrFile to fd 2...
	I0918 19:23:55.372861  730111 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 19:23:55.373129  730111 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17263-642665/.minikube/bin
	I0918 19:23:55.373346  730111 out.go:303] Setting JSON to false
	I0918 19:23:55.373401  730111 mustload.go:65] Loading cluster: multinode-689235
	I0918 19:23:55.373533  730111 notify.go:220] Checking for updates...
	I0918 19:23:55.373823  730111 config.go:182] Loaded profile config "multinode-689235": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0918 19:23:55.373836  730111 status.go:255] checking status of multinode-689235 ...
	I0918 19:23:55.374356  730111 cli_runner.go:164] Run: docker container inspect multinode-689235 --format={{.State.Status}}
	I0918 19:23:55.393475  730111 status.go:330] multinode-689235 host status = "Stopped" (err=<nil>)
	I0918 19:23:55.393494  730111 status.go:343] host is not running, skipping remaining checks
	I0918 19:23:55.393511  730111 status.go:257] multinode-689235 status: &{Name:multinode-689235 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0918 19:23:55.393538  730111 status.go:255] checking status of multinode-689235-m02 ...
	I0918 19:23:55.393843  730111 cli_runner.go:164] Run: docker container inspect multinode-689235-m02 --format={{.State.Status}}
	I0918 19:23:55.411646  730111 status.go:330] multinode-689235-m02 host status = "Stopped" (err=<nil>)
	I0918 19:23:55.411667  730111 status.go:343] host is not running, skipping remaining checks
	I0918 19:23:55.411674  730111 status.go:257] multinode-689235-m02 status: &{Name:multinode-689235-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.16s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (82.27s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:344: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:354: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-689235 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0918 19:25:14.595411  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/functional-382151/client.crt: no such file or directory
multinode_test.go:354: (dbg) Done: out/minikube-linux-arm64 start -p multinode-689235 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m21.503589641s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-arm64 -p multinode-689235 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (82.27s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (35.95s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-689235
multinode_test.go:452: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-689235-m02 --driver=docker  --container-runtime=crio
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-689235-m02 --driver=docker  --container-runtime=crio: exit status 14 (89.636582ms)

                                                
                                                
-- stdout --
	* [multinode-689235-m02] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17263
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17263-642665/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17263-642665/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-689235-m02' is duplicated with machine name 'multinode-689235-m02' in profile 'multinode-689235'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-689235-m03 --driver=docker  --container-runtime=crio
multinode_test.go:460: (dbg) Done: out/minikube-linux-arm64 start -p multinode-689235-m03 --driver=docker  --container-runtime=crio: (33.474644239s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-689235
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-689235: exit status 80 (344.382591ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-689235
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-689235-m03 already exists in multinode-689235-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-689235-m03
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-689235-m03: (1.989926609s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (35.95s)
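
Note: the conflict validated above is that a new profile may not reuse a machine name already owned by an existing multi-node profile; sketched with a placeholder profile:

	$ minikube start -p demo --nodes=2 --driver=docker --container-runtime=crio    # creates machines demo and demo-m02
	$ minikube start -p demo-m02 --driver=docker --container-runtime=crio          # refused with MK_USAGE, exit status 14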

                                                
                                    
TestPreload (172.07s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-222827 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E0918 19:26:37.644364  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/functional-382151/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-222827 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m23.839792775s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-222827 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-222827 image pull gcr.io/k8s-minikube/busybox: (2.1225532s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-222827
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-222827: (5.87376426s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-222827 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E0918 19:28:00.494103  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/client.crt: no such file or directory
E0918 19:28:19.084474  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/ingress-addon-legacy-407320/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-222827 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (1m17.536073858s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-222827 image list
helpers_test.go:175: Cleaning up "test-preload-222827" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-222827
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-222827: (2.439872561s)
--- PASS: TestPreload (172.07s)
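
Note: the preload check is start-without-preload, pull an extra image, stop, restart, and confirm the image survived; the same sequence by hand, with the versions the test pins:

	$ minikube start -p demo --memory=2200 --wait=true --preload=false --driver=docker --container-runtime=crio --kubernetes-version=v1.24.4
	$ minikube -p demo image pull gcr.io/k8s-minikube/busybox
	$ minikube stop -p demo
	$ minikube start -p demo --memory=2200 --wait=true --driver=docker --container-runtime=crio
	$ minikube -p demo image list | grep busybox    # the pulled image should still be listed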

                                                
                                    
TestScheduledStopUnix (106.75s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-790113 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-790113 --memory=2048 --driver=docker  --container-runtime=crio: (30.990725753s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-790113 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-790113 -n scheduled-stop-790113
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-790113 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-790113 --cancel-scheduled
E0918 19:29:42.142945  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/ingress-addon-legacy-407320/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-790113 -n scheduled-stop-790113
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-790113
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-790113 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0918 19:30:14.596359  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/functional-382151/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-790113
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-790113: exit status 7 (74.1857ms)

                                                
                                                
-- stdout --
	scheduled-stop-790113
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-790113 -n scheduled-stop-790113
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-790113 -n scheduled-stop-790113: exit status 7 (88.655544ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-790113" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-790113
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-790113: (4.063496574s)
--- PASS: TestScheduledStopUnix (106.75s)
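
Note: the schedule/cancel/re-schedule dance above maps onto three stop invocations plus a status poll; for example:

	$ minikube stop -p demo --schedule 5m          # arm a stop five minutes out
	$ minikube stop -p demo --cancel-scheduled     # disarm it
	$ minikube stop -p demo --schedule 15s         # re-arm with a short fuse
	$ sleep 20; minikube status -p demo --format='{{.Host}}'    # prints Stopped, exit status 7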

                                                
                                    
TestInsufficientStorage (11.32s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-808235 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-808235 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (8.710654123s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"fbf18976-ba45-4feb-863f-4a8f372ce73b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-808235] minikube v1.31.2 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"299f80fa-5442-4ae8-a9b6-8a9a50de5acc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17263"}}
	{"specversion":"1.0","id":"1d52404a-11e9-4555-aab6-45d734ce3e75","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a32b4d6d-dfae-4978-bbd7-c0543f59c7a9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17263-642665/kubeconfig"}}
	{"specversion":"1.0","id":"7a605a4b-768a-47d3-b7ba-f46317906632","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17263-642665/.minikube"}}
	{"specversion":"1.0","id":"c9a47967-0144-43b5-965f-cf621523136b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"b4bc2d98-1482-49cb-aa53-c507a422314e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"3f55395b-b253-4d80-8488-91e981b8f161","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"18592643-3148-473b-a4eb-ba80a26efb68","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"966bcd61-6eef-4ad1-9583-0dced48ff964","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"fb180a08-fe3e-4705-8820-475111eb76da","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"77c72de8-7206-4880-afe7-11afba88a01d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-808235 in cluster insufficient-storage-808235","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"3f3513e1-e0ef-4b02-bc8b-b0924a2d929a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"e49dff98-1e59-4e88-925f-6794814e345a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"2c4db035-e7ea-427a-b1d3-690128b5a9a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-808235 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-808235 --output=json --layout=cluster: exit status 7 (345.245209ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-808235","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-808235","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0918 19:30:48.189484  746890 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-808235" does not appear in /home/jenkins/minikube-integration/17263-642665/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-808235 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-808235 --output=json --layout=cluster: exit status 7 (314.389467ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-808235","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-808235","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0918 19:30:48.504148  746943 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-808235" does not appear in /home/jenkins/minikube-integration/17263-642665/kubeconfig
	E0918 19:30:48.516387  746943 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/insufficient-storage-808235/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-808235" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-808235
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-808235: (1.948361405s)
--- PASS: TestInsufficientStorage (11.32s)
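
Note: with --output=json each line is a CloudEvents-style object, so the RSRC_DOCKER_STORAGE failure above is machine-readable; assuming jq is installed, the error event can be pulled out along these lines:

	$ minikube start -p demo --memory=2048 --output=json --wait=true --driver=docker --container-runtime=crio \
	      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'
	# RSRC_DOCKER_STORAGE: Docker is out of disk space! (/var is at 100%% of capacity). ...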

                                                
                                    
TestKubernetesUpgrade (397.19s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-707295 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0918 19:33:00.486692  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/client.crt: no such file or directory
E0918 19:33:19.084348  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/ingress-addon-legacy-407320/client.crt: no such file or directory
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-707295 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m8.583131863s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-707295
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-707295: (1.296184187s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-707295 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-707295 status --format={{.Host}}: exit status 7 (76.141338ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-707295 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-707295 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m50.939391142s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-707295 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-707295 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-707295 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio: exit status 106 (110.540563ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-707295] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17263
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17263-642665/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17263-642665/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.28.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-707295
	    minikube start -p kubernetes-upgrade-707295 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7072952 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.28.2, by running:
	    
	    minikube start -p kubernetes-upgrade-707295 --kubernetes-version=v1.28.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-707295 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-707295 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (33.457802959s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-707295" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-707295
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-707295: (2.562912834s)
--- PASS: TestKubernetesUpgrade (397.19s)
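
Note: the upgrade path exercised above is stop-then-start at a higher --kubernetes-version; an in-place downgrade is refused (exit 106) and needs a delete or a second profile, as the suggestion text spells out. Condensed:

	$ minikube start -p demo --memory=2200 --kubernetes-version=v1.16.0 --driver=docker --container-runtime=crio
	$ minikube stop -p demo
	$ minikube start -p demo --memory=2200 --kubernetes-version=v1.28.2 --driver=docker --container-runtime=crio   # upgrade succeeds
	$ minikube start -p demo --memory=2200 --kubernetes-version=v1.16.0 --driver=docker --container-runtime=crio   # K8S_DOWNGRADE_UNSUPPORTED, exit 106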

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-479234 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-479234 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (91.665734ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-479234] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17263
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17263-642665/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17263-642665/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                    
TestPause/serial/Start (90.39s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-468214 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-468214 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m30.382262152s)
--- PASS: TestPause/serial/Start (90.39s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (43.71s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-479234 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-479234 --driver=docker  --container-runtime=crio: (43.194062268s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-479234 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (43.71s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (7.14s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-479234 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-479234 --no-kubernetes --driver=docker  --container-runtime=crio: (4.756870486s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-479234 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-479234 status -o json: exit status 2 (378.331794ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-479234","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-479234
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-479234: (2.001302685s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (7.14s)

                                                
                                    
TestNoKubernetes/serial/Start (9.38s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-479234 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-479234 --no-kubernetes --driver=docker  --container-runtime=crio: (9.382927479s)
--- PASS: TestNoKubernetes/serial/Start (9.38s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.32s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-479234 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-479234 "sudo systemctl is-active --quiet service kubelet": exit status 1 (323.983848ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.32s)
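Note: exit status 3 from ssh is the pass condition here; systemctl is-active --quiet exits 0 only for an active unit, and a non-zero code (typically 3 for an inactive one) confirms kubelet is not running. A sketch of the same assertion, assuming the profile from the steps above:

	# is-active --quiet prints nothing; only the exit code matters
	if ! out/minikube-linux-arm64 ssh -p NoKubernetes-479234 \
	    "sudo systemctl is-active --quiet service kubelet"; then
	  echo "kubelet inactive, as the test expects"
	fi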

TestNoKubernetes/serial/ProfileList (0.99s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.99s)

TestNoKubernetes/serial/Stop (1.24s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-479234
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-479234: (1.237646779s)
--- PASS: TestNoKubernetes/serial/Stop (1.24s)

TestNoKubernetes/serial/StartNoArgs (7.64s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-479234 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-479234 --driver=docker  --container-runtime=crio: (7.636061774s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.64s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.31s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-479234 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-479234 "sudo systemctl is-active --quiet service kubelet": exit status 1 (306.245444ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.31s)

TestPause/serial/SecondStartNoReconfiguration (30.17s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-468214 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-468214 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (30.098943669s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (30.17s)

TestPause/serial/Pause (1.16s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-468214 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-468214 --alsologtostderr -v=5: (1.161236867s)
--- PASS: TestPause/serial/Pause (1.16s)

TestPause/serial/VerifyStatus (0.43s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-468214 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-468214 --output=json --layout=cluster: exit status 2 (429.455245ms)

-- stdout --
	{"Name":"pause-468214","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-468214","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.43s)
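Note: the cluster layout encodes component state as HTTP-like codes (200 OK, 418 Paused, 405 Stopped), and status exits 2 here while the cluster is paused, so the non-zero exit above is the expected outcome. A sketch of reading those fields, assuming jq is available:

	out=$(out/minikube-linux-arm64 status -p pause-468214 --output=json --layout=cluster || true)
	echo "$out" | jq -r '.StatusName'                              # Paused
	echo "$out" | jq -r '.Nodes[0].Components.kubelet.StatusName'  # Stopped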

TestPause/serial/Unpause (1.1s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-468214 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-linux-arm64 unpause -p pause-468214 --alsologtostderr -v=5: (1.097990298s)
--- PASS: TestPause/serial/Unpause (1.10s)

TestPause/serial/PauseAgain (1.6s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-468214 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-468214 --alsologtostderr -v=5: (1.599872449s)
--- PASS: TestPause/serial/PauseAgain (1.60s)

TestPause/serial/DeletePaused (3.54s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-468214 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-468214 --alsologtostderr -v=5: (3.535674691s)
--- PASS: TestPause/serial/DeletePaused (3.54s)

TestPause/serial/VerifyDeletedResources (0.43s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-468214
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-468214: exit status 1 (35.536752ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-468214: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.43s)
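Note: the failed docker volume inspect is the point of this check; after delete -p, the profile's volume must be gone, and inspect then exits non-zero with an empty [] result. The same cleanup assertion as a one-liner:

	docker volume inspect pause-468214 >/dev/null 2>&1 || echo "profile volume removed"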

TestStoppedBinaryUpgrade/Setup (1.23s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.23s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.68s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-311194
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.68s)

TestNetworkPlugins/group/false (3.89s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-240505 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-240505 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (207.593204ms)

-- stdout --
	* [false-240505] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17263
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17263-642665/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17263-642665/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0918 19:38:09.599296  781689 out.go:296] Setting OutFile to fd 1 ...
	I0918 19:38:09.599579  781689 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 19:38:09.599589  781689 out.go:309] Setting ErrFile to fd 2...
	I0918 19:38:09.599598  781689 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0918 19:38:09.599897  781689 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17263-642665/.minikube/bin
	I0918 19:38:09.600364  781689 out.go:303] Setting JSON to false
	I0918 19:38:09.601396  781689 start.go:128] hostinfo: {"hostname":"ip-172-31-30-239","uptime":12035,"bootTime":1695053855,"procs":249,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1044-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0918 19:38:09.601469  781689 start.go:138] virtualization:  
	I0918 19:38:09.604259  781689 out.go:177] * [false-240505] minikube v1.31.2 on Ubuntu 20.04 (arm64)
	I0918 19:38:09.608813  781689 out.go:177]   - MINIKUBE_LOCATION=17263
	I0918 19:38:09.610633  781689 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0918 19:38:09.608966  781689 notify.go:220] Checking for updates...
	I0918 19:38:09.614340  781689 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17263-642665/kubeconfig
	I0918 19:38:09.616505  781689 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17263-642665/.minikube
	I0918 19:38:09.618826  781689 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0918 19:38:09.620611  781689 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0918 19:38:09.623060  781689 config.go:182] Loaded profile config "kubernetes-upgrade-707295": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.2
	I0918 19:38:09.623173  781689 driver.go:373] Setting default libvirt URI to qemu:///system
	I0918 19:38:09.648097  781689 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0918 19:38:09.648209  781689 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0918 19:38:09.744724  781689 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:36 OomKillDisable:true NGoroutines:45 SystemTime:2023-09-18 19:38:09.734890121 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1044-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215040000 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I0918 19:38:09.744832  781689 docker.go:294] overlay module found
	I0918 19:38:09.746890  781689 out.go:177] * Using the docker driver based on user configuration
	I0918 19:38:09.749103  781689 start.go:298] selected driver: docker
	I0918 19:38:09.749118  781689 start.go:902] validating driver "docker" against <nil>
	I0918 19:38:09.749132  781689 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0918 19:38:09.751715  781689 out.go:177] 
	W0918 19:38:09.753563  781689 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0918 19:38:09.755509  781689 out.go:177] 

** /stderr **
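Note: exit status 14 (MK_USAGE) is the asserted behavior: the test passes --cni=false, and minikube rejects it because the crio runtime requires a CNI plugin. A start that crio would accept either omits --cni (letting minikube choose) or names a plugin explicitly; a hypothetical example:

	out/minikube-linux-arm64 start -p false-240505 --memory=2048 \
	  --cni=bridge --driver=docker --container-runtime=crio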
net_test.go:88: 
----------------------- debugLogs start: false-240505 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-240505

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-240505

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-240505

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-240505

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-240505

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-240505

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-240505

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-240505

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-240505

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-240505

>>> host: /etc/nsswitch.conf:
* Profile "false-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240505"

>>> host: /etc/hosts:
* Profile "false-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240505"

>>> host: /etc/resolv.conf:
* Profile "false-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240505"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-240505

>>> host: crictl pods:
* Profile "false-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240505"

>>> host: crictl containers:
* Profile "false-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240505"

>>> k8s: describe netcat deployment:
error: context "false-240505" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-240505" does not exist

>>> k8s: netcat logs:
error: context "false-240505" does not exist

>>> k8s: describe coredns deployment:
error: context "false-240505" does not exist

>>> k8s: describe coredns pods:
error: context "false-240505" does not exist

>>> k8s: coredns logs:
error: context "false-240505" does not exist

>>> k8s: describe api server pod(s):
error: context "false-240505" does not exist

>>> k8s: api server logs:
error: context "false-240505" does not exist

>>> host: /etc/cni:
* Profile "false-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240505"

>>> host: ip a s:
* Profile "false-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240505"

>>> host: ip r s:
* Profile "false-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240505"

>>> host: iptables-save:
* Profile "false-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240505"

>>> host: iptables table nat:
* Profile "false-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240505"

>>> k8s: describe kube-proxy daemon set:
error: context "false-240505" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-240505" does not exist

>>> k8s: kube-proxy logs:
error: context "false-240505" does not exist

>>> host: kubelet daemon status:
* Profile "false-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240505"

>>> host: kubelet daemon config:
* Profile "false-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240505"

>>> k8s: kubelet logs:
* Profile "false-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240505"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240505"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240505"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17263-642665/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 18 Sep 2023 19:34:26 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-707295
contexts:
- context:
    cluster: kubernetes-upgrade-707295
    user: kubernetes-upgrade-707295
  name: kubernetes-upgrade-707295
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-707295
  user:
    client-certificate: /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/kubernetes-upgrade-707295/client.crt
    client-key: /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/kubernetes-upgrade-707295/client.key

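Note: this kubeconfig explains the probe failures above. It only carries entries for the concurrent kubernetes-upgrade-707295 profile, current-context is empty, and a false-240505 context was never created because start exited during flag validation. A quick confirmation, assuming the same KUBECONFIG:

	kubectl config get-contexts -o name      # lists kubernetes-upgrade-707295 only
	kubectl config use-context false-240505  # fails: no such context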

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-240505

>>> host: docker daemon status:
* Profile "false-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240505"

>>> host: docker daemon config:
* Profile "false-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240505"

>>> host: /etc/docker/daemon.json:
* Profile "false-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240505"

>>> host: docker system info:
* Profile "false-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240505"

>>> host: cri-docker daemon status:
* Profile "false-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240505"

>>> host: cri-docker daemon config:
* Profile "false-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240505"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240505"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240505"

>>> host: cri-dockerd version:
* Profile "false-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240505"

>>> host: containerd daemon status:
* Profile "false-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240505"

>>> host: containerd daemon config:
* Profile "false-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240505"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240505"

>>> host: /etc/containerd/config.toml:
* Profile "false-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240505"

>>> host: containerd config dump:
* Profile "false-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240505"

>>> host: crio daemon status:
* Profile "false-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240505"

>>> host: crio daemon config:
* Profile "false-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240505"

>>> host: /etc/crio:
* Profile "false-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240505"

>>> host: crio config:
* Profile "false-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-240505"

----------------------- debugLogs end: false-240505 [took: 3.522256058s] --------------------------------
helpers_test.go:175: Cleaning up "false-240505" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-240505
--- PASS: TestNetworkPlugins/group/false (3.89s)

TestStartStop/group/old-k8s-version/serial/FirstStart (122.29s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-546567 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-546567 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (2m2.290865376s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (122.29s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.61s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-546567 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [51862fab-9a05-4c58-82c1-701e6f7a517d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [51862fab-9a05-4c58-82c1-701e6f7a517d] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.035119572s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-546567 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.61s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-546567 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-546567 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.01s)

TestStartStop/group/old-k8s-version/serial/Stop (12.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-546567 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-546567 --alsologtostderr -v=3: (12.125640988s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.13s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-546567 -n old-k8s-version-546567
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-546567 -n old-k8s-version-546567: exit status 7 (75.4978ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-546567 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/old-k8s-version/serial/SecondStart (436.36s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-546567 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-546567 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (7m15.852789133s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-546567 -n old-k8s-version-546567
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (436.36s)

TestStartStop/group/no-preload/serial/FirstStart (63.84s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-427688 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-427688 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2: (1m3.838077695s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (63.84s)

TestStartStop/group/no-preload/serial/DeployApp (9.51s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-427688 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5271369a-d783-4a57-92d8-f414a28faed2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5271369a-d783-4a57-92d8-f414a28faed2] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.033213973s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-427688 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.51s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.22s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-427688 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-427688 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.085150103s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-427688 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.22s)

TestStartStop/group/no-preload/serial/Stop (12.18s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-427688 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-427688 --alsologtostderr -v=3: (12.17677851s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.18s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-427688 -n no-preload-427688
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-427688 -n no-preload-427688: exit status 7 (81.213629ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-427688 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/no-preload/serial/SecondStart (347.87s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-427688 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2
E0918 19:45:14.596221  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/functional-382151/client.crt: no such file or directory
E0918 19:46:22.144102  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/ingress-addon-legacy-407320/client.crt: no such file or directory
E0918 19:48:00.486616  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/client.crt: no such file or directory
E0918 19:48:19.084108  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/ingress-addon-legacy-407320/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-427688 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2: (5m47.337987543s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-427688 -n no-preload-427688
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (347.87s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-2prdv" [bde6eb75-bdcb-4704-92f6-28f7f3d3f401] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.025590151s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.14s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-2prdv" [bde6eb75-bdcb-4704-92f6-28f7f3d3f401] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.014541724s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-546567 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.14s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.42s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p old-k8s-version-546567 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20220726-ed811e41
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.42s)

TestStartStop/group/old-k8s-version/serial/Pause (4.68s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-546567 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-546567 --alsologtostderr -v=1: (1.119557899s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-546567 -n old-k8s-version-546567
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-546567 -n old-k8s-version-546567: exit status 2 (415.458648ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-546567 -n old-k8s-version-546567
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-546567 -n old-k8s-version-546567: exit status 2 (340.150252ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-546567 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p old-k8s-version-546567 --alsologtostderr -v=1: (1.29605587s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-546567 -n old-k8s-version-546567
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-546567 -n old-k8s-version-546567
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (4.68s)
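Note: the pause round-trip above leans on status rendering a Go template over the status struct and exiting 2 while components are paused. A condensed sketch of the same sequence (the final Running value is the expected output after unpause, not shown in the log):

	out/minikube-linux-arm64 pause -p old-k8s-version-546567
	out/minikube-linux-arm64 status -p old-k8s-version-546567 --format='{{.APIServer}}' || true  # Paused
	out/minikube-linux-arm64 unpause -p old-k8s-version-546567
	out/minikube-linux-arm64 status -p old-k8s-version-546567 --format='{{.APIServer}}'          # Running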

TestStartStop/group/embed-certs/serial/FirstStart (83.2s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-064392 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-064392 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2: (1m23.195060419s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (83.20s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (11.06s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-fkh5k" [539a23b5-814a-41e4-a13d-f7d6f45a553d] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-fkh5k" [539a23b5-814a-41e4-a13d-f7d6f45a553d] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.057922662s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (11.06s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.14s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-fkh5k" [539a23b5-814a-41e4-a13d-f7d6f45a553d] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.020020037s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-427688 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.14s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.39s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p no-preload-427688 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.39s)
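
Note: the image check above runs crictl inside the node over SSH. The same listing, with an optional jq filter to print only the tags (jq on the host is an assumption, not part of the test):

    out/minikube-linux-arm64 ssh -p no-preload-427688 "sudo crictl images -o json" \
      | jq -r '.images[].repoTags[]'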

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.86s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-427688 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-427688 -n no-preload-427688
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-427688 -n no-preload-427688: exit status 2 (389.801723ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-427688 -n no-preload-427688
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-427688 -n no-preload-427688: exit status 2 (365.990126ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-427688 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-427688 -n no-preload-427688
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-427688 -n no-preload-427688
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.86s)
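
Note: the exit status 2 results above are expected while the profile is paused; the stage runs the sequence below, checking component status between pause and unpause (commands copied from the log):

    out/minikube-linux-arm64 pause -p no-preload-427688 --alsologtostderr -v=1
    out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-427688 -n no-preload-427688   # "Paused", exit 2
    out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-427688 -n no-preload-427688     # "Stopped", exit 2
    out/minikube-linux-arm64 unpause -p no-preload-427688 --alsologtostderr -v=1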

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (79.42s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-326214 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-326214 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2: (1m19.418564415s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (79.42s)
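
Note: this profile differs from the other FirstStart runs only in pinning the API server to port 8444. Sketch of the invocation (copied from the log):

    out/minikube-linux-arm64 start -p default-k8s-diff-port-326214 \
      --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 \
      --driver=docker --container-runtime=crio --kubernetes-version=v1.28.2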

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.6s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-064392 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [62c83f21-eb6b-4e02-9763-484964a3f727] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [62c83f21-eb6b-4e02-9763-484964a3f727] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.044394985s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-064392 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.60s)
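
Note: DeployApp applies the suite's busybox manifest and then reads the container's open-file limit. A sketch of the same steps (testdata/busybox.yaml ships with the test suite; the explicit wait is an illustrative stand-in for the harness's 8m0s poll):

    kubectl --context embed-certs-064392 create -f testdata/busybox.yaml
    kubectl --context embed-certs-064392 wait --for=condition=ready pod busybox --timeout=480s
    kubectl --context embed-certs-064392 exec busybox -- /bin/sh -c "ulimit -n"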

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-064392 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-064392 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.139020316s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-064392 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.28s)
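
Note: the enable call above overrides the metrics-server image and points its registry at the placeholder fake.domain, then inspects the resulting deployment. Sketch (copied from the log):

    out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-064392 \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
      --registries=MetricsServer=fake.domain
    kubectl --context embed-certs-064392 describe deploy/metrics-server -n kube-system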

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.13s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-064392 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-064392 --alsologtostderr -v=3: (12.12996754s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.13s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-064392 -n embed-certs-064392
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-064392 -n embed-certs-064392: exit status 7 (81.841275ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-064392 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)
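
Note: here exit status 7 accompanies the "Stopped" host state, which is what this stage expects before enabling the dashboard addon on a stopped profile. Sketch (commands copied from the log):

    out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-064392 -n embed-certs-064392   # "Stopped", exit 7
    out/minikube-linux-arm64 addons enable dashboard -p embed-certs-064392 \
      --images=MetricsScraper=registry.k8s.io/echoserver:1.4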

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (347.68s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-064392 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2
E0918 19:52:18.297336  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/old-k8s-version-546567/client.crt: no such file or directory
E0918 19:52:18.302735  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/old-k8s-version-546567/client.crt: no such file or directory
E0918 19:52:18.313060  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/old-k8s-version-546567/client.crt: no such file or directory
E0918 19:52:18.333362  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/old-k8s-version-546567/client.crt: no such file or directory
E0918 19:52:18.373637  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/old-k8s-version-546567/client.crt: no such file or directory
E0918 19:52:18.454502  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/old-k8s-version-546567/client.crt: no such file or directory
E0918 19:52:18.614861  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/old-k8s-version-546567/client.crt: no such file or directory
E0918 19:52:18.936007  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/old-k8s-version-546567/client.crt: no such file or directory
E0918 19:52:19.577224  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/old-k8s-version-546567/client.crt: no such file or directory
E0918 19:52:20.858436  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/old-k8s-version-546567/client.crt: no such file or directory
E0918 19:52:23.419126  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/old-k8s-version-546567/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-064392 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2: (5m47.170786891s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-064392 -n embed-certs-064392
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (347.68s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.71s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-326214 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [968dcd37-2865-4708-9d7e-c07420806ceb] Pending
helpers_test.go:344: "busybox" [968dcd37-2865-4708-9d7e-c07420806ceb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [968dcd37-2865-4708-9d7e-c07420806ceb] Running
E0918 19:52:28.539417  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/old-k8s-version-546567/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.058450251s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-326214 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.71s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.63s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-326214 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-326214 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.510564422s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-326214 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.63s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.12s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-326214 --alsologtostderr -v=3
E0918 19:52:38.780563  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/old-k8s-version-546567/client.crt: no such file or directory
E0918 19:52:43.534155  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-326214 --alsologtostderr -v=3: (12.115450166s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.12s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-326214 -n default-k8s-diff-port-326214
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-326214 -n default-k8s-diff-port-326214: exit status 7 (72.20769ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-326214 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (635.65s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-326214 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2
E0918 19:52:59.260941  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/old-k8s-version-546567/client.crt: no such file or directory
E0918 19:53:00.487487  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/client.crt: no such file or directory
E0918 19:53:19.084360  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/ingress-addon-legacy-407320/client.crt: no such file or directory
E0918 19:53:40.221175  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/old-k8s-version-546567/client.crt: no such file or directory
E0918 19:54:29.489622  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/no-preload-427688/client.crt: no such file or directory
E0918 19:54:29.494941  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/no-preload-427688/client.crt: no such file or directory
E0918 19:54:29.505220  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/no-preload-427688/client.crt: no such file or directory
E0918 19:54:29.525515  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/no-preload-427688/client.crt: no such file or directory
E0918 19:54:29.565835  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/no-preload-427688/client.crt: no such file or directory
E0918 19:54:29.646226  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/no-preload-427688/client.crt: no such file or directory
E0918 19:54:29.806565  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/no-preload-427688/client.crt: no such file or directory
E0918 19:54:30.130184  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/no-preload-427688/client.crt: no such file or directory
E0918 19:54:30.771265  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/no-preload-427688/client.crt: no such file or directory
E0918 19:54:32.052087  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/no-preload-427688/client.crt: no such file or directory
E0918 19:54:34.613215  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/no-preload-427688/client.crt: no such file or directory
E0918 19:54:39.734402  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/no-preload-427688/client.crt: no such file or directory
E0918 19:54:49.975402  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/no-preload-427688/client.crt: no such file or directory
E0918 19:55:02.142426  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/old-k8s-version-546567/client.crt: no such file or directory
E0918 19:55:10.455641  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/no-preload-427688/client.crt: no such file or directory
E0918 19:55:14.596393  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/functional-382151/client.crt: no such file or directory
E0918 19:55:51.416331  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/no-preload-427688/client.crt: no such file or directory
E0918 19:57:13.337546  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/no-preload-427688/client.crt: no such file or directory
E0918 19:57:18.296125  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/old-k8s-version-546567/client.crt: no such file or directory
E0918 19:57:45.983427  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/old-k8s-version-546567/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-326214 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2: (10m34.766849353s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-326214 -n default-k8s-diff-port-326214
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (635.65s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (11.05s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-8f575" [800a23c5-58df-403f-9da3-efcb2bbc1b52] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-8f575" [800a23c5-58df-403f-9da3-efcb2bbc1b52] Running
E0918 19:58:00.486502  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.051190956s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (11.05s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-8f575" [800a23c5-58df-403f-9da3-efcb2bbc1b52] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012281053s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-064392 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p embed-certs-064392 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.38s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.9s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-064392 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p embed-certs-064392 --alsologtostderr -v=1: (1.035576007s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-064392 -n embed-certs-064392
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-064392 -n embed-certs-064392: exit status 2 (428.529012ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-064392 -n embed-certs-064392
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-064392 -n embed-certs-064392: exit status 2 (347.24594ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-064392 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-064392 -n embed-certs-064392
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-064392 -n embed-certs-064392
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.90s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (48.76s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-134747 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2
E0918 19:58:19.084388  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/ingress-addon-legacy-407320/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-134747 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2: (48.754442924s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (48.76s)
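
Note: the newest-cni variant adds the CNI-specific start flags seen above. Reproduction sketch (all flags copied from the log):

    out/minikube-linux-arm64 start -p newest-cni-134747 \
      --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa \
      --feature-gates ServerSideApply=true --network-plugin=cni \
      --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
      --driver=docker --container-runtime=crio --kubernetes-version=v1.28.2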

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.34s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-134747 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-134747 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.344005918s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.34s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (2.02s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-134747 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-134747 --alsologtostderr -v=3: (2.017741027s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.02s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-134747 -n newest-cni-134747
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-134747 -n newest-cni-134747: exit status 7 (86.666664ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-134747 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (30.96s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-134747 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2
E0918 19:59:29.489438  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/no-preload-427688/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-134747 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.2: (30.540574077s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-134747 -n newest-cni-134747
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (30.96s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.36s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p newest-cni-134747 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.36s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.47s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-134747 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-134747 -n newest-cni-134747
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-134747 -n newest-cni-134747: exit status 2 (376.528358ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-134747 -n newest-cni-134747
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-134747 -n newest-cni-134747: exit status 2 (371.117623ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-134747 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-134747 -n newest-cni-134747
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-134747 -n newest-cni-134747
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.47s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (53.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-240505 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E0918 19:59:57.178075  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/no-preload-427688/client.crt: no such file or directory
E0918 19:59:57.645683  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/functional-382151/client.crt: no such file or directory
E0918 20:00:14.595539  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/functional-382151/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-240505 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (53.353688894s)
--- PASS: TestNetworkPlugins/group/auto/Start (53.35s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-240505 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.33s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-240505 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-fblwk" [d9784f91-02a4-49a2-b61b-d137eb374441] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-fblwk" [d9784f91-02a4-49a2-b61b-d137eb374441] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.010165s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.38s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-240505 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.26s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-240505 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-240505 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.22s)
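
Note: the DNS, Localhost, and HairPin checks above all exec into the same netcat deployment; the last probe verifies hairpin traffic (the pod reaching itself through its own service). The three probes, copied from the log:

    kubectl --context auto-240505 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context auto-240505 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context auto-240505 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"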

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (81.89s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-240505 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E0918 20:02:18.295895  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/old-k8s-version-546567/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-240505 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m21.892598229s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (81.89s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (5.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-fm544" [ab8ed669-ed02-4a76-93c8-dab90bdeac9a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.031862412s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.04s)
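
Note: ControllerPod waits for the CNI daemon pod to become healthy before the connectivity tests run. A roughly equivalent manual wait (namespace and label copied from the log; 600s mirrors the logged 10m0s timeout):

    kubectl --context kindnet-240505 wait --for=condition=ready \
      --namespace=kube-system pod --selector=app=kindnet --timeout=600s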

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-240505 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.33s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-240505 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-grkk9" [2d50e7ab-86b8-4a4a-ab83-c1e05d908031] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-grkk9" [2d50e7ab-86b8-4a4a-ab83-c1e05d908031] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.010902717s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.33s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-240505 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-240505 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-240505 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (81.98s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-240505 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E0918 20:03:19.084774  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/ingress-addon-legacy-407320/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-240505 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m21.97994266s)
--- PASS: TestNetworkPlugins/group/calico/Start (81.98s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.05s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-b5w8r" [9cf47cbb-0e90-49b3-aeba-6ca3b1955054] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.045386763s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.05s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.14s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-b5w8r" [9cf47cbb-0e90-49b3-aeba-6ca3b1955054] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.020379269s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-326214 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.14s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.5s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p default-k8s-diff-port-326214 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.50s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (4.57s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-326214 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-326214 --alsologtostderr -v=1: (1.182021602s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-326214 -n default-k8s-diff-port-326214
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-326214 -n default-k8s-diff-port-326214: exit status 2 (430.754545ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-326214 -n default-k8s-diff-port-326214
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-326214 -n default-k8s-diff-port-326214: exit status 2 (451.293421ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-326214 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p default-k8s-diff-port-326214 --alsologtostderr -v=1: (1.084926027s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-326214 -n default-k8s-diff-port-326214
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-326214 -n default-k8s-diff-port-326214
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.57s)
E0918 20:07:35.976954  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/kindnet-240505/client.crt: no such file or directory
E0918 20:07:37.257476  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/kindnet-240505/client.crt: no such file or directory
E0918 20:07:39.818358  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/kindnet-240505/client.crt: no such file or directory
E0918 20:07:44.538345  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/default-k8s-diff-port-326214/client.crt: no such file or directory
E0918 20:07:44.939330  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/kindnet-240505/client.crt: no such file or directory
E0918 20:07:55.179544  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/kindnet-240505/client.crt: no such file or directory
E0918 20:08:00.487026  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/addons-351470/client.crt: no such file or directory
E0918 20:08:05.019547  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/default-k8s-diff-port-326214/client.crt: no such file or directory
E0918 20:08:15.660306  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/kindnet-240505/client.crt: no such file or directory
E0918 20:08:19.084256  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/ingress-addon-legacy-407320/client.crt: no such file or directory
E0918 20:08:22.691017  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/auto-240505/client.crt: no such file or directory

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (74.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-240505 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E0918 20:04:29.489269  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/no-preload-427688/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-240505 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m14.038007684s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (74.04s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (5.05s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-vjxjj" [efb4b99c-eefb-42d6-9ac9-90520717dc53] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.043684047s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.05s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-240505 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.33s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (10.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-240505 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-9pq68" [ef72b94c-7341-4567-9c44-2077ea6349ca] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-9pq68" [ef72b94c-7341-4567-9c44-2077ea6349ca] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.015481634s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.44s)

TestNetworkPlugins/group/calico/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-240505 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.25s)

TestNetworkPlugins/group/calico/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-240505 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.23s)

TestNetworkPlugins/group/calico/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-240505 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.19s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-240505 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.37s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.34s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-240505 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-zmnxw" [c4a841d9-42b5-4165-98ba-405d9134394d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-zmnxw" [c4a841d9-42b5-4165-98ba-405d9134394d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.012018182s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.34s)

TestNetworkPlugins/group/custom-flannel/DNS (0.37s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-240505 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.37s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.28s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-240505 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.28s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.25s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-240505 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.25s)

TestNetworkPlugins/group/enable-default-cni/Start (48.36s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-240505 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-240505 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (48.358765204s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (48.36s)

TestNetworkPlugins/group/flannel/Start (74.24s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-240505 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E0918 20:05:38.848986  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/auto-240505/client.crt: no such file or directory
E0918 20:05:38.854235  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/auto-240505/client.crt: no such file or directory
E0918 20:05:38.864493  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/auto-240505/client.crt: no such file or directory
E0918 20:05:38.885463  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/auto-240505/client.crt: no such file or directory
E0918 20:05:38.925716  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/auto-240505/client.crt: no such file or directory
E0918 20:05:39.006003  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/auto-240505/client.crt: no such file or directory
E0918 20:05:39.166410  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/auto-240505/client.crt: no such file or directory
E0918 20:05:39.486685  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/auto-240505/client.crt: no such file or directory
E0918 20:05:40.127740  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/auto-240505/client.crt: no such file or directory
E0918 20:05:41.407907  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/auto-240505/client.crt: no such file or directory
E0918 20:05:43.968105  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/auto-240505/client.crt: no such file or directory
E0918 20:05:49.088608  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/auto-240505/client.crt: no such file or directory
E0918 20:05:59.329333  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/auto-240505/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-240505 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m14.238049333s)
--- PASS: TestNetworkPlugins/group/flannel/Start (74.24s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.45s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-240505 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.45s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.55s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-240505 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-xrt4p" [ddb0da65-eaca-4085-8096-86341dd912a4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-xrt4p" [ddb0da65-eaca-4085-8096-86341dd912a4] Running
E0918 20:06:19.809508  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/auto-240505/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.013905931s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.55s)

TestNetworkPlugins/group/enable-default-cni/DNS (26.25s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-240505 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context enable-default-cni-240505 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.255027405s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-240505 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Done: kubectl --context enable-default-cni-240505 exec deployment/netcat -- nslookup kubernetes.default: (10.204181611s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (26.25s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-240505 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-240505 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.22s)

TestNetworkPlugins/group/flannel/ControllerPod (5.04s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-m7xdl" [fe9cf34b-248f-4bc8-b291-60f7777611b5] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.036821711s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.04s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-240505 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.41s)

TestNetworkPlugins/group/flannel/NetCatPod (11.5s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-240505 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-nkdq4" [c4f973fe-5fcf-450a-9927-1d52f0ac5493] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0918 20:07:00.769874  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/auto-240505/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-nkdq4" [c4f973fe-5fcf-450a-9927-1d52f0ac5493] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.014702278s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.50s)

TestNetworkPlugins/group/flannel/DNS (0.28s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-240505 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.28s)

TestNetworkPlugins/group/flannel/Localhost (0.27s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-240505 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.27s)

TestNetworkPlugins/group/flannel/HairPin (0.25s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-240505 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.25s)

TestNetworkPlugins/group/bridge/Start (76.23s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-240505 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E0918 20:07:18.296766  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/old-k8s-version-546567/client.crt: no such file or directory
E0918 20:07:24.057720  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/default-k8s-diff-port-326214/client.crt: no such file or directory
E0918 20:07:24.063114  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/default-k8s-diff-port-326214/client.crt: no such file or directory
E0918 20:07:24.073250  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/default-k8s-diff-port-326214/client.crt: no such file or directory
E0918 20:07:24.093495  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/default-k8s-diff-port-326214/client.crt: no such file or directory
E0918 20:07:24.133732  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/default-k8s-diff-port-326214/client.crt: no such file or directory
E0918 20:07:24.214249  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/default-k8s-diff-port-326214/client.crt: no such file or directory
E0918 20:07:24.374581  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/default-k8s-diff-port-326214/client.crt: no such file or directory
E0918 20:07:24.694977  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/default-k8s-diff-port-326214/client.crt: no such file or directory
E0918 20:07:25.336097  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/default-k8s-diff-port-326214/client.crt: no such file or directory
E0918 20:07:26.616956  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/default-k8s-diff-port-326214/client.crt: no such file or directory
E0918 20:07:29.177205  648003 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/default-k8s-diff-port-326214/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-240505 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m16.226882694s)
--- PASS: TestNetworkPlugins/group/bridge/Start (76.23s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-240505 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

TestNetworkPlugins/group/bridge/NetCatPod (10.34s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-240505 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-fwxt6" [ae2bb34a-5943-4af0-939e-6b925d4940c8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-fwxt6" [ae2bb34a-5943-4af0-939e-6b925d4940c8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.012331456s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.34s)

TestNetworkPlugins/group/bridge/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-240505 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.22s)

TestNetworkPlugins/group/bridge/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-240505 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.22s)

TestNetworkPlugins/group/bridge/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-240505 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.18s)

Test skip (29/304)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.2/cached-images (0.00s)

TestDownloadOnly/v1.28.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.2/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.2/binaries (0.00s)

TestDownloadOnly/v1.28.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.2/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.2/kubectl (0.00s)

TestDownloadOnlyKic (0.65s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-150608 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:234: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-150608" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-150608
--- SKIP: TestDownloadOnlyKic (0.65s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:420: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:474: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.25s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-708592" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-708592
--- SKIP: TestStartStop/group/disable-driver-mounts (0.25s)

TestNetworkPlugins/group/kubenet (3.75s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-240505 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-240505

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-240505

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-240505

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-240505

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-240505

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-240505

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-240505

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-240505

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-240505

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-240505

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240505"

>>> host: /etc/hosts:
* Profile "kubenet-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240505"

>>> host: /etc/resolv.conf:
* Profile "kubenet-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240505"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-240505

>>> host: crictl pods:
* Profile "kubenet-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240505"

>>> host: crictl containers:
* Profile "kubenet-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240505"

>>> k8s: describe netcat deployment:
error: context "kubenet-240505" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-240505" does not exist

>>> k8s: netcat logs:
error: context "kubenet-240505" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-240505" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-240505" does not exist

>>> k8s: coredns logs:
error: context "kubenet-240505" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-240505" does not exist

>>> k8s: api server logs:
error: context "kubenet-240505" does not exist

>>> host: /etc/cni:
* Profile "kubenet-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240505"

>>> host: ip a s:
* Profile "kubenet-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240505"

>>> host: ip r s:
* Profile "kubenet-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240505"

>>> host: iptables-save:
* Profile "kubenet-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240505"

>>> host: iptables table nat:
* Profile "kubenet-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240505"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-240505" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-240505" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-240505" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240505"

>>> host: kubelet daemon config:
* Profile "kubenet-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240505"

>>> k8s: kubelet logs:
* Profile "kubenet-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240505"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240505"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240505"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17263-642665/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 18 Sep 2023 19:34:26 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-707295
contexts:
- context:
    cluster: kubernetes-upgrade-707295
    user: kubernetes-upgrade-707295
  name: kubernetes-upgrade-707295
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-707295
  user:
    client-certificate: /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/kubernetes-upgrade-707295/client.crt
    client-key: /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/kubernetes-upgrade-707295/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-240505

>>> host: docker daemon status:
* Profile "kubenet-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240505"

>>> host: docker daemon config:
* Profile "kubenet-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240505"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240505"

>>> host: docker system info:
* Profile "kubenet-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240505"

>>> host: cri-docker daemon status:
* Profile "kubenet-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240505"

>>> host: cri-docker daemon config:
* Profile "kubenet-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240505"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240505"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240505"

>>> host: cri-dockerd version:
* Profile "kubenet-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240505"

>>> host: containerd daemon status:
* Profile "kubenet-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240505"

>>> host: containerd daemon config:
* Profile "kubenet-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240505"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240505"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240505"

>>> host: containerd config dump:
* Profile "kubenet-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240505"

>>> host: crio daemon status:
* Profile "kubenet-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240505"

>>> host: crio daemon config:
* Profile "kubenet-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240505"

>>> host: /etc/crio:
* Profile "kubenet-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240505"

>>> host: crio config:
* Profile "kubenet-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-240505"

----------------------- debugLogs end: kubenet-240505 [took: 3.574373519s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-240505" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-240505
--- SKIP: TestNetworkPlugins/group/kubenet (3.75s)

TestNetworkPlugins/group/cilium (4.18s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-240505 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-240505

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-240505

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-240505

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-240505

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-240505

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-240505

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-240505

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-240505

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-240505

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-240505

>>> host: /etc/nsswitch.conf:
* Profile "cilium-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240505"

>>> host: /etc/hosts:
* Profile "cilium-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240505"

>>> host: /etc/resolv.conf:
* Profile "cilium-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240505"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-240505

>>> host: crictl pods:
* Profile "cilium-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240505"

>>> host: crictl containers:
* Profile "cilium-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240505"

>>> k8s: describe netcat deployment:
error: context "cilium-240505" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-240505" does not exist

>>> k8s: netcat logs:
error: context "cilium-240505" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-240505" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-240505" does not exist

>>> k8s: coredns logs:
error: context "cilium-240505" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-240505" does not exist

>>> k8s: api server logs:
error: context "cilium-240505" does not exist

>>> host: /etc/cni:
* Profile "cilium-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240505"

>>> host: ip a s:
* Profile "cilium-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240505"

>>> host: ip r s:
* Profile "cilium-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240505"

>>> host: iptables-save:
* Profile "cilium-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240505"

>>> host: iptables table nat:
* Profile "cilium-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240505"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-240505

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-240505

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-240505" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-240505" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-240505

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-240505

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-240505" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-240505" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-240505" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-240505" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-240505" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240505"

>>> host: kubelet daemon config:
* Profile "cilium-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240505"

>>> k8s: kubelet logs:
* Profile "cilium-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240505"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240505"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240505"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17263-642665/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 18 Sep 2023 19:34:26 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-707295
contexts:
- context:
    cluster: kubernetes-upgrade-707295
    user: kubernetes-upgrade-707295
  name: kubernetes-upgrade-707295
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-707295
  user:
    client-certificate: /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/kubernetes-upgrade-707295/client.crt
    client-key: /home/jenkins/minikube-integration/17263-642665/.minikube/profiles/kubernetes-upgrade-707295/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-240505

>>> host: docker daemon status:
* Profile "cilium-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240505"

>>> host: docker daemon config:
* Profile "cilium-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240505"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240505"

>>> host: docker system info:
* Profile "cilium-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240505"

>>> host: cri-docker daemon status:
* Profile "cilium-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240505"

>>> host: cri-docker daemon config:
* Profile "cilium-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240505"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240505"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240505"

>>> host: cri-dockerd version:
* Profile "cilium-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240505"

>>> host: containerd daemon status:
* Profile "cilium-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240505"

>>> host: containerd daemon config:
* Profile "cilium-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240505"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240505"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240505"

>>> host: containerd config dump:
* Profile "cilium-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240505"

>>> host: crio daemon status:
* Profile "cilium-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240505"

>>> host: crio daemon config:
* Profile "cilium-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240505"

>>> host: /etc/crio:
* Profile "cilium-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240505"

>>> host: crio config:
* Profile "cilium-240505" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-240505"

----------------------- debugLogs end: cilium-240505 [took: 4.008582222s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-240505" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-240505
--- SKIP: TestNetworkPlugins/group/cilium (4.18s)
