Test Report: Docker_Linux_crio_arm64 17581

8f89b804228acd053c87abbbfb2e31f99595775c:2023-11-14:31875

Tests failed (8/308)

TestAddons/parallel/Ingress (168.99s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:206: (dbg) Run:  kubectl --context addons-008546 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:231: (dbg) Run:  kubectl --context addons-008546 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context addons-008546 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [c8769e1f-b476-4a29-aab4-663db0f18990] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [c8769e1f-b476-4a29-aab4-663db0f18990] Running
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.01738394s
addons_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p addons-008546 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-008546 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.293258235s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:277: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
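
Exit status 28 here is almost certainly curl's timeout code (CURLE_OPERATION_TIMEDOUT) surfacing through minikube ssh: the request never got a response from the ingress controller. A minimal sketch for reproducing the probe by hand, assuming the addons-008546 profile is still running (the --max-time value is illustrative):

	# probe the ingress from inside the node, with verbose output and an explicit timeout
	out/minikube-linux-arm64 -p addons-008546 ssh "curl -sv --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"
	# confirm the controller pod is actually serving before suspecting the route
	kubectl --context addons-008546 -n ingress-nginx get pods -o wide
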
addons_test.go:285: (dbg) Run:  kubectl --context addons-008546 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p addons-008546 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:296: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.058827656s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
addons_test.go:298: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:302: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
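
The timeout above means nothing answered DNS queries on 192.168.49.2 port 53. A short sketch for narrowing that down by hand, assuming the profile is still up (the grep pattern is an assumption about the addon's pod name):

	# is the ingress-dns pod present and Running on the node?
	kubectl --context addons-008546 -n kube-system get pods -o wide | grep -i ingress-dns
	# retry the lookup with an explicit short timeout against the node IP
	nslookup -timeout=5 hello-john.test "$(out/minikube-linux-arm64 -p addons-008546 ip)"
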
addons_test.go:305: (dbg) Run:  out/minikube-linux-arm64 -p addons-008546 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-arm64 -p addons-008546 addons disable ingress-dns --alsologtostderr -v=1: (1.492243171s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-arm64 -p addons-008546 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-arm64 -p addons-008546 addons disable ingress --alsologtostderr -v=1: (7.808688647s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-008546
helpers_test.go:235: (dbg) docker inspect addons-008546:

-- stdout --
	[
	    {
	        "Id": "63bca6863ef633f4c623c4491f60a2973659302cae79d0696b0518c489aab6c3",
	        "Created": "2023-11-14T13:34:48.978843359Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1192678,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-11-14T13:34:49.306623455Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:977f9df3a3e2dccc16de7b5e8115e5e1294a98b99d56135cce7538135e7a7a9d",
	        "ResolvConfPath": "/var/lib/docker/containers/63bca6863ef633f4c623c4491f60a2973659302cae79d0696b0518c489aab6c3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/63bca6863ef633f4c623c4491f60a2973659302cae79d0696b0518c489aab6c3/hostname",
	        "HostsPath": "/var/lib/docker/containers/63bca6863ef633f4c623c4491f60a2973659302cae79d0696b0518c489aab6c3/hosts",
	        "LogPath": "/var/lib/docker/containers/63bca6863ef633f4c623c4491f60a2973659302cae79d0696b0518c489aab6c3/63bca6863ef633f4c623c4491f60a2973659302cae79d0696b0518c489aab6c3-json.log",
	        "Name": "/addons-008546",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-008546:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-008546",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2d6cae31bbed07c2cf4a28df70977f39afb6b39001cbb966cd2db3c41bdf32f3-init/diff:/var/lib/docker/overlay2/ad9b1528ccc99a2a23c8205d781cfd6ce01aa0662a87aad99178910b13bfc77f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2d6cae31bbed07c2cf4a28df70977f39afb6b39001cbb966cd2db3c41bdf32f3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2d6cae31bbed07c2cf4a28df70977f39afb6b39001cbb966cd2db3c41bdf32f3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2d6cae31bbed07c2cf4a28df70977f39afb6b39001cbb966cd2db3c41bdf32f3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-008546",
	                "Source": "/var/lib/docker/volumes/addons-008546/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-008546",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-008546",
	                "name.minikube.sigs.k8s.io": "addons-008546",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6989b40ebe0ea7d3da9c5c0f995e4ad266d7dbea4a21ecb2b9143cbc5d57d8fe",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34279"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34278"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34275"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34277"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34276"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/6989b40ebe0e",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-008546": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "63bca6863ef6",
	                        "addons-008546"
	                    ],
	                    "NetworkID": "3e84add1c6e149a0ac2aebe43b190d2fa8b2794a167bfd864a578b8e39cd2869",
	                    "EndpointID": "085160d820e22171e57f028868f8a93113f4a5ff6a3188dfc4473c691cda2e36",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
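
Most of the inspect dump above matters here for the dynamic port map under NetworkSettings.Ports. A single mapped port can be pulled with the same Go-template trick the harness itself uses later in this log (container name as above):

	# host port mapped to the node's SSH daemon (22/tcp inside the container)
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-008546
	# per the Ports section above, this prints 34279
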
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-008546 -n addons-008546
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-008546 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-008546 logs -n 25: (1.661560031s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube               | jenkins | v1.32.0 | 14 Nov 23 13:34 UTC | 14 Nov 23 13:34 UTC |
	| delete  | -p download-only-924841                                                                     | download-only-924841   | jenkins | v1.32.0 | 14 Nov 23 13:34 UTC | 14 Nov 23 13:34 UTC |
	| delete  | -p download-only-924841                                                                     | download-only-924841   | jenkins | v1.32.0 | 14 Nov 23 13:34 UTC | 14 Nov 23 13:34 UTC |
	| start   | --download-only -p                                                                          | download-docker-182153 | jenkins | v1.32.0 | 14 Nov 23 13:34 UTC |                     |
	|         | download-docker-182153                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-182153                                                                   | download-docker-182153 | jenkins | v1.32.0 | 14 Nov 23 13:34 UTC | 14 Nov 23 13:34 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-094237   | jenkins | v1.32.0 | 14 Nov 23 13:34 UTC |                     |
	|         | binary-mirror-094237                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:46157                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-094237                                                                     | binary-mirror-094237   | jenkins | v1.32.0 | 14 Nov 23 13:34 UTC | 14 Nov 23 13:34 UTC |
	| addons  | enable dashboard -p                                                                         | addons-008546          | jenkins | v1.32.0 | 14 Nov 23 13:34 UTC |                     |
	|         | addons-008546                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-008546          | jenkins | v1.32.0 | 14 Nov 23 13:34 UTC |                     |
	|         | addons-008546                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-008546 --wait=true                                                                | addons-008546          | jenkins | v1.32.0 | 14 Nov 23 13:34 UTC | 14 Nov 23 13:37 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-008546          | jenkins | v1.32.0 | 14 Nov 23 13:37 UTC | 14 Nov 23 13:37 UTC |
	|         | -p addons-008546                                                                            |                        |         |         |                     |                     |
	| ip      | addons-008546 ip                                                                            | addons-008546          | jenkins | v1.32.0 | 14 Nov 23 13:37 UTC | 14 Nov 23 13:37 UTC |
	| addons  | addons-008546 addons disable                                                                | addons-008546          | jenkins | v1.32.0 | 14 Nov 23 13:37 UTC | 14 Nov 23 13:37 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ssh     | addons-008546 ssh cat                                                                       | addons-008546          | jenkins | v1.32.0 | 14 Nov 23 13:37 UTC | 14 Nov 23 13:37 UTC |
	|         | /opt/local-path-provisioner/pvc-07268d80-a275-4e12-8808-af7957f493bf_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-008546 addons disable                                                                | addons-008546          | jenkins | v1.32.0 | 14 Nov 23 13:37 UTC | 14 Nov 23 13:38 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-008546          | jenkins | v1.32.0 | 14 Nov 23 13:37 UTC | 14 Nov 23 13:37 UTC |
	|         | addons-008546                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-008546          | jenkins | v1.32.0 | 14 Nov 23 13:37 UTC | 14 Nov 23 13:37 UTC |
	|         | -p addons-008546                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-008546 addons                                                                        | addons-008546          | jenkins | v1.32.0 | 14 Nov 23 13:38 UTC | 14 Nov 23 13:38 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-008546          | jenkins | v1.32.0 | 14 Nov 23 13:38 UTC | 14 Nov 23 13:38 UTC |
	|         | addons-008546                                                                               |                        |         |         |                     |                     |
	| addons  | addons-008546 addons                                                                        | addons-008546          | jenkins | v1.32.0 | 14 Nov 23 13:38 UTC | 14 Nov 23 13:38 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-008546 addons                                                                        | addons-008546          | jenkins | v1.32.0 | 14 Nov 23 13:38 UTC | 14 Nov 23 13:38 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-008546 ssh curl -s                                                                   | addons-008546          | jenkins | v1.32.0 | 14 Nov 23 13:38 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-008546 ip                                                                            | addons-008546          | jenkins | v1.32.0 | 14 Nov 23 13:40 UTC | 14 Nov 23 13:40 UTC |
	| addons  | addons-008546 addons disable                                                                | addons-008546          | jenkins | v1.32.0 | 14 Nov 23 13:41 UTC | 14 Nov 23 13:41 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-008546 addons disable                                                                | addons-008546          | jenkins | v1.32.0 | 14 Nov 23 13:41 UTC | 14 Nov 23 13:41 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/14 13:34:25
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1114 13:34:25.286264 1192192 out.go:296] Setting OutFile to fd 1 ...
	I1114 13:34:25.286457 1192192 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 13:34:25.286487 1192192 out.go:309] Setting ErrFile to fd 2...
	I1114 13:34:25.286507 1192192 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 13:34:25.286805 1192192 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17581-1186318/.minikube/bin
	I1114 13:34:25.287269 1192192 out.go:303] Setting JSON to false
	I1114 13:34:25.288122 1192192 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":37012,"bootTime":1699931854,"procs":143,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1114 13:34:25.288222 1192192 start.go:138] virtualization:  
	I1114 13:34:25.291135 1192192 out.go:177] * [addons-008546] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1114 13:34:25.293193 1192192 out.go:177]   - MINIKUBE_LOCATION=17581
	I1114 13:34:25.295223 1192192 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1114 13:34:25.293289 1192192 notify.go:220] Checking for updates...
	I1114 13:34:25.299389 1192192 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17581-1186318/kubeconfig
	I1114 13:34:25.301584 1192192 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17581-1186318/.minikube
	I1114 13:34:25.303303 1192192 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1114 13:34:25.304982 1192192 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1114 13:34:25.307320 1192192 driver.go:378] Setting default libvirt URI to qemu:///system
	I1114 13:34:25.331826 1192192 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1114 13:34:25.331918 1192192 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1114 13:34:25.416058 1192192 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-11-14 13:34:25.406489728 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1114 13:34:25.416179 1192192 docker.go:295] overlay module found
	I1114 13:34:25.418555 1192192 out.go:177] * Using the docker driver based on user configuration
	I1114 13:34:25.420635 1192192 start.go:298] selected driver: docker
	I1114 13:34:25.420654 1192192 start.go:902] validating driver "docker" against <nil>
	I1114 13:34:25.420668 1192192 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1114 13:34:25.421289 1192192 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1114 13:34:25.489313 1192192 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-11-14 13:34:25.479298398 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1114 13:34:25.489470 1192192 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1114 13:34:25.489692 1192192 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1114 13:34:25.491617 1192192 out.go:177] * Using Docker driver with root privileges
	I1114 13:34:25.493461 1192192 cni.go:84] Creating CNI manager for ""
	I1114 13:34:25.493480 1192192 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1114 13:34:25.493495 1192192 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1114 13:34:25.493511 1192192 start_flags.go:323] config:
	{Name:addons-008546 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:addons-008546 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 13:34:25.497251 1192192 out.go:177] * Starting control plane node addons-008546 in cluster addons-008546
	I1114 13:34:25.499088 1192192 cache.go:121] Beginning downloading kic base image for docker with crio
	I1114 13:34:25.500942 1192192 out.go:177] * Pulling base image ...
	I1114 13:34:25.502695 1192192 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1114 13:34:25.502750 1192192 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4
	I1114 13:34:25.502763 1192192 cache.go:56] Caching tarball of preloaded images
	I1114 13:34:25.502795 1192192 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 in local docker daemon
	I1114 13:34:25.502841 1192192 preload.go:174] Found /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1114 13:34:25.502860 1192192 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1114 13:34:25.503210 1192192 profile.go:148] Saving config to /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/addons-008546/config.json ...
	I1114 13:34:25.503242 1192192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/addons-008546/config.json: {Name:mk4aeb1e00227339942b9d7e1357eb8d36170f13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 13:34:25.520471 1192192 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 to local cache
	I1114 13:34:25.520618 1192192 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 in local cache directory
	I1114 13:34:25.520707 1192192 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 in local cache directory, skipping pull
	I1114 13:34:25.520712 1192192 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 exists in cache, skipping pull
	I1114 13:34:25.520720 1192192 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 as a tarball
	I1114 13:34:25.520725 1192192 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 from local cache
	I1114 13:34:41.408724 1192192 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 from cached tarball
	I1114 13:34:41.408762 1192192 cache.go:194] Successfully downloaded all kic artifacts
	I1114 13:34:41.408829 1192192 start.go:365] acquiring machines lock for addons-008546: {Name:mk2981aa330082a522395476d6b33fce1d1e3069 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 13:34:41.408945 1192192 start.go:369] acquired machines lock for "addons-008546" in 92.561µs
	I1114 13:34:41.408979 1192192 start.go:93] Provisioning new machine with config: &{Name:addons-008546 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:addons-008546 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1114 13:34:41.409063 1192192 start.go:125] createHost starting for "" (driver="docker")
	I1114 13:34:41.411348 1192192 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1114 13:34:41.411638 1192192 start.go:159] libmachine.API.Create for "addons-008546" (driver="docker")
	I1114 13:34:41.411689 1192192 client.go:168] LocalClient.Create starting
	I1114 13:34:41.411810 1192192 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/ca.pem
	I1114 13:34:41.845048 1192192 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/cert.pem
	I1114 13:34:42.236843 1192192 cli_runner.go:164] Run: docker network inspect addons-008546 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1114 13:34:42.254616 1192192 cli_runner.go:211] docker network inspect addons-008546 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1114 13:34:42.254707 1192192 network_create.go:281] running [docker network inspect addons-008546] to gather additional debugging logs...
	I1114 13:34:42.254730 1192192 cli_runner.go:164] Run: docker network inspect addons-008546
	W1114 13:34:42.275197 1192192 cli_runner.go:211] docker network inspect addons-008546 returned with exit code 1
	I1114 13:34:42.275235 1192192 network_create.go:284] error running [docker network inspect addons-008546]: docker network inspect addons-008546: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-008546 not found
	I1114 13:34:42.275250 1192192 network_create.go:286] output of [docker network inspect addons-008546]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-008546 not found
	
	** /stderr **
	I1114 13:34:42.275389 1192192 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1114 13:34:42.293887 1192192 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400045d3a0}
	I1114 13:34:42.293933 1192192 network_create.go:124] attempt to create docker network addons-008546 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1114 13:34:42.294001 1192192 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-008546 addons-008546
	I1114 13:34:42.371027 1192192 network_create.go:108] docker network addons-008546 192.168.49.0/24 created
	I1114 13:34:42.371062 1192192 kic.go:121] calculated static IP "192.168.49.2" for the "addons-008546" container
	I1114 13:34:42.371137 1192192 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1114 13:34:42.387957 1192192 cli_runner.go:164] Run: docker volume create addons-008546 --label name.minikube.sigs.k8s.io=addons-008546 --label created_by.minikube.sigs.k8s.io=true
	I1114 13:34:42.407142 1192192 oci.go:103] Successfully created a docker volume addons-008546
	I1114 13:34:42.407244 1192192 cli_runner.go:164] Run: docker run --rm --name addons-008546-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-008546 --entrypoint /usr/bin/test -v addons-008546:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 -d /var/lib
	I1114 13:34:44.593802 1192192 cli_runner.go:217] Completed: docker run --rm --name addons-008546-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-008546 --entrypoint /usr/bin/test -v addons-008546:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 -d /var/lib: (2.186513363s)
	I1114 13:34:44.593835 1192192 oci.go:107] Successfully prepared a docker volume addons-008546
	I1114 13:34:44.593848 1192192 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1114 13:34:44.593866 1192192 kic.go:194] Starting extracting preloaded images to volume ...
	I1114 13:34:44.593955 1192192 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-008546:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 -I lz4 -xf /preloaded.tar -C /extractDir
	I1114 13:34:48.897219 1192192 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-008546:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 -I lz4 -xf /preloaded.tar -C /extractDir: (4.303221894s)
	I1114 13:34:48.897249 1192192 kic.go:203] duration metric: took 4.303379 seconds to extract preloaded images to volume
	W1114 13:34:48.897394 1192192 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1114 13:34:48.897496 1192192 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1114 13:34:48.962880 1192192 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-008546 --name addons-008546 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-008546 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-008546 --network addons-008546 --ip 192.168.49.2 --volume addons-008546:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24
	I1114 13:34:49.315113 1192192 cli_runner.go:164] Run: docker container inspect addons-008546 --format={{.State.Running}}
	I1114 13:34:49.345200 1192192 cli_runner.go:164] Run: docker container inspect addons-008546 --format={{.State.Status}}
	I1114 13:34:49.378358 1192192 cli_runner.go:164] Run: docker exec addons-008546 stat /var/lib/dpkg/alternatives/iptables
	I1114 13:34:49.432270 1192192 oci.go:144] the created container "addons-008546" has a running status.
	I1114 13:34:49.432302 1192192 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17581-1186318/.minikube/machines/addons-008546/id_rsa...
	I1114 13:34:49.654188 1192192 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17581-1186318/.minikube/machines/addons-008546/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1114 13:34:49.677629 1192192 cli_runner.go:164] Run: docker container inspect addons-008546 --format={{.State.Status}}
	I1114 13:34:49.707750 1192192 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1114 13:34:49.707773 1192192 kic_runner.go:114] Args: [docker exec --privileged addons-008546 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1114 13:34:49.783548 1192192 cli_runner.go:164] Run: docker container inspect addons-008546 --format={{.State.Status}}
	I1114 13:34:49.813664 1192192 machine.go:88] provisioning docker machine ...
	I1114 13:34:49.813693 1192192 ubuntu.go:169] provisioning hostname "addons-008546"
	I1114 13:34:49.813764 1192192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-008546
	I1114 13:34:49.843972 1192192 main.go:141] libmachine: Using SSH client type: native
	I1114 13:34:49.844403 1192192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bded0] 0x3c0640 <nil>  [] 0s} 127.0.0.1 34279 <nil> <nil>}
	I1114 13:34:49.844417 1192192 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-008546 && echo "addons-008546" | sudo tee /etc/hostname
	I1114 13:34:49.845176 1192192 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1114 13:34:53.000718 1192192 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-008546
	
	I1114 13:34:53.000835 1192192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-008546
	I1114 13:34:53.020022 1192192 main.go:141] libmachine: Using SSH client type: native
	I1114 13:34:53.020430 1192192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bded0] 0x3c0640 <nil>  [] 0s} 127.0.0.1 34279 <nil> <nil>}
	I1114 13:34:53.020454 1192192 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-008546' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-008546/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-008546' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1114 13:34:53.161773 1192192 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1114 13:34:53.161799 1192192 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17581-1186318/.minikube CaCertPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17581-1186318/.minikube}
	I1114 13:34:53.161865 1192192 ubuntu.go:177] setting up certificates
	I1114 13:34:53.161883 1192192 provision.go:83] configureAuth start
	I1114 13:34:53.161956 1192192 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-008546
	I1114 13:34:53.181061 1192192 provision.go:138] copyHostCerts
	I1114 13:34:53.181137 1192192 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17581-1186318/.minikube/ca.pem (1082 bytes)
	I1114 13:34:53.181262 1192192 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17581-1186318/.minikube/cert.pem (1123 bytes)
	I1114 13:34:53.181323 1192192 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17581-1186318/.minikube/key.pem (1675 bytes)
	I1114 13:34:53.181372 1192192 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17581-1186318/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17581-1186318/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17581-1186318/.minikube/certs/ca-key.pem org=jenkins.addons-008546 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-008546]
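The SAN list above (node IP, loopback, localhost, minikube, the profile name) ends up in the server certificate signed by the minikube CA. A minimal openssl sketch of the same technique, reusing the file names from the log; this is not the code path minikube uses, which generates the certificate in Go:

	openssl genrsa -out server-key.pem 2048
	openssl req -new -key server-key.pem -subj "/O=jenkins.addons-008546" -out server.csr
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
	  -extfile <(printf 'subjectAltName=IP:192.168.49.2,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:addons-008546') \
	  -out server.pem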
	I1114 13:34:53.469410 1192192 provision.go:172] copyRemoteCerts
	I1114 13:34:53.469488 1192192 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1114 13:34:53.469538 1192192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-008546
	I1114 13:34:53.490820 1192192 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34279 SSHKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/addons-008546/id_rsa Username:docker}
	I1114 13:34:53.591627 1192192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1114 13:34:53.620061 1192192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1114 13:34:53.648507 1192192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1114 13:34:53.676938 1192192 provision.go:86] duration metric: configureAuth took 515.039791ms
	I1114 13:34:53.676964 1192192 ubuntu.go:193] setting minikube options for container-runtime
	I1114 13:34:53.677149 1192192 config.go:182] Loaded profile config "addons-008546": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 13:34:53.677255 1192192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-008546
	I1114 13:34:53.698391 1192192 main.go:141] libmachine: Using SSH client type: native
	I1114 13:34:53.698824 1192192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bded0] 0x3c0640 <nil>  [] 0s} 127.0.0.1 34279 <nil> <nil>}
	I1114 13:34:53.698845 1192192 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1114 13:34:53.960164 1192192 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
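The SSH command above writes a systemd environment drop-in and restarts CRI-O so it treats the in-cluster service CIDR as an insecure registry. Its effect could be verified from the host with commands along these lines (illustrative; not part of the test run):

	docker exec addons-008546 cat /etc/sysconfig/crio.minikube
	# CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	docker exec addons-008546 systemctl is-active crio
	# active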
	
	I1114 13:34:53.960202 1192192 machine.go:91] provisioned docker machine in 4.146518614s
	I1114 13:34:53.960219 1192192 client.go:171] LocalClient.Create took 12.548518449s
	I1114 13:34:53.960233 1192192 start.go:167] duration metric: libmachine.API.Create for "addons-008546" took 12.548596208s
	I1114 13:34:53.960244 1192192 start.go:300] post-start starting for "addons-008546" (driver="docker")
	I1114 13:34:53.960257 1192192 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1114 13:34:53.960335 1192192 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1114 13:34:53.960389 1192192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-008546
	I1114 13:34:53.980302 1192192 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34279 SSHKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/addons-008546/id_rsa Username:docker}
	I1114 13:34:54.084736 1192192 ssh_runner.go:195] Run: cat /etc/os-release
	I1114 13:34:54.089522 1192192 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1114 13:34:54.089562 1192192 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1114 13:34:54.089579 1192192 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1114 13:34:54.089586 1192192 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1114 13:34:54.089597 1192192 filesync.go:126] Scanning /home/jenkins/minikube-integration/17581-1186318/.minikube/addons for local assets ...
	I1114 13:34:54.089669 1192192 filesync.go:126] Scanning /home/jenkins/minikube-integration/17581-1186318/.minikube/files for local assets ...
	I1114 13:34:54.089696 1192192 start.go:303] post-start completed in 129.445528ms
	I1114 13:34:54.090006 1192192 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-008546
	I1114 13:34:54.108730 1192192 profile.go:148] Saving config to /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/addons-008546/config.json ...
	I1114 13:34:54.109037 1192192 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1114 13:34:54.109087 1192192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-008546
	I1114 13:34:54.130853 1192192 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34279 SSHKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/addons-008546/id_rsa Username:docker}
	I1114 13:34:54.226603 1192192 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1114 13:34:54.232444 1192192 start.go:128] duration metric: createHost completed in 12.823365075s
	I1114 13:34:54.232471 1192192 start.go:83] releasing machines lock for "addons-008546", held for 12.823513333s
	I1114 13:34:54.232569 1192192 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-008546
	I1114 13:34:54.249935 1192192 ssh_runner.go:195] Run: cat /version.json
	I1114 13:34:54.249960 1192192 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1114 13:34:54.249987 1192192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-008546
	I1114 13:34:54.250016 1192192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-008546
	I1114 13:34:54.270752 1192192 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34279 SSHKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/addons-008546/id_rsa Username:docker}
	I1114 13:34:54.283279 1192192 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34279 SSHKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/addons-008546/id_rsa Username:docker}
	I1114 13:34:54.499712 1192192 ssh_runner.go:195] Run: systemctl --version
	I1114 13:34:54.505591 1192192 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1114 13:34:54.654276 1192192 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1114 13:34:54.660726 1192192 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1114 13:34:54.685120 1192192 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1114 13:34:54.685210 1192192 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1114 13:34:54.726853 1192192 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
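The two find/mv passes above sideline any pre-existing loopback and bridge/podman CNI configs by appending a .mk_disabled suffix, leaving pod networking to the kindnet manifest applied later. Afterwards the directory would look roughly like this (the listing is illustrative; the two bridge file names are the ones reported in the log):

	docker exec addons-008546 ls /etc/cni/net.d
	# 100-crio-bridge.conf.mk_disabled  87-podman-bridge.conflist.mk_disabled  <loopback conf>.mk_disabled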
	I1114 13:34:54.726876 1192192 start.go:472] detecting cgroup driver to use...
	I1114 13:34:54.726908 1192192 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1114 13:34:54.726963 1192192 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1114 13:34:54.745129 1192192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1114 13:34:54.759549 1192192 docker.go:203] disabling cri-docker service (if available) ...
	I1114 13:34:54.759625 1192192 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1114 13:34:54.776686 1192192 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1114 13:34:54.794705 1192192 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1114 13:34:54.885256 1192192 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1114 13:34:54.985809 1192192 docker.go:219] disabling docker service ...
	I1114 13:34:54.985896 1192192 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1114 13:34:55.008939 1192192 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1114 13:34:55.024206 1192192 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1114 13:34:55.120340 1192192 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1114 13:34:55.226354 1192192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1114 13:34:55.240236 1192192 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1114 13:34:55.260101 1192192 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1114 13:34:55.260212 1192192 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 13:34:55.272980 1192192 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1114 13:34:55.273088 1192192 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 13:34:55.285290 1192192 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 13:34:55.296952 1192192 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
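Net effect of the sed edits above: /etc/crio/crio.conf.d/02-crio.conf pins the pause image, sets the cgroup manager, and re-adds conmon_cgroup right after it. Assuming the stock CRI-O drop-in layout (pause_image under [crio.image], the cgroup keys under [crio.runtime]), the relevant lines end up roughly as:

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"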
	I1114 13:34:55.308896 1192192 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1114 13:34:55.319852 1192192 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1114 13:34:55.330104 1192192 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1114 13:34:55.339931 1192192 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1114 13:34:55.430878 1192192 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1114 13:34:55.556455 1192192 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1114 13:34:55.556617 1192192 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1114 13:34:55.561493 1192192 start.go:540] Will wait 60s for crictl version
	I1114 13:34:55.561591 1192192 ssh_runner.go:195] Run: which crictl
	I1114 13:34:55.565924 1192192 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1114 13:34:55.610379 1192192 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1114 13:34:55.610529 1192192 ssh_runner.go:195] Run: crio --version
	I1114 13:34:55.660135 1192192 ssh_runner.go:195] Run: crio --version
	I1114 13:34:55.705524 1192192 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.6 ...
	I1114 13:34:55.707523 1192192 cli_runner.go:164] Run: docker network inspect addons-008546 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1114 13:34:55.725151 1192192 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1114 13:34:55.729812 1192192 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1114 13:34:55.743298 1192192 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1114 13:34:55.743364 1192192 ssh_runner.go:195] Run: sudo crictl images --output json
	I1114 13:34:55.813899 1192192 crio.go:496] all images are preloaded for cri-o runtime.
	I1114 13:34:55.813924 1192192 crio.go:415] Images already preloaded, skipping extraction
	I1114 13:34:55.813978 1192192 ssh_runner.go:195] Run: sudo crictl images --output json
	I1114 13:34:55.859681 1192192 crio.go:496] all images are preloaded for cri-o runtime.
	I1114 13:34:55.859703 1192192 cache_images.go:84] Images are preloaded, skipping loading
	I1114 13:34:55.859777 1192192 ssh_runner.go:195] Run: crio config
	I1114 13:34:55.917096 1192192 cni.go:84] Creating CNI manager for ""
	I1114 13:34:55.917118 1192192 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1114 13:34:55.917152 1192192 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1114 13:34:55.917177 1192192 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-008546 NodeName:addons-008546 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1114 13:34:55.917329 1192192 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-008546"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1114 13:34:55.917404 1192192 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-008546 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:addons-008546 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1114 13:34:55.917476 1192192 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1114 13:34:55.929005 1192192 binaries.go:44] Found k8s binaries, skipping transfer
	I1114 13:34:55.929079 1192192 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1114 13:34:55.941408 1192192 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I1114 13:34:55.963676 1192192 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1114 13:34:55.987929 1192192 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
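The rendered config above has just been staged as /var/tmp/minikube/kubeadm.yaml.new and is later moved into place for kubeadm init. For kubeadm v1.28 it could be sanity-checked offline with something like this (illustrative; the test itself relies on kubeadm init's own validation):

	kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
	# or a full dry run that renders the manifests without bringing up the node:
	kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run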
	I1114 13:34:56.012460 1192192 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1114 13:34:56.017292 1192192 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1114 13:34:56.031168 1192192 certs.go:56] Setting up /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/addons-008546 for IP: 192.168.49.2
	I1114 13:34:56.031212 1192192 certs.go:190] acquiring lock for shared ca certs: {Name:mk1fdfc415c611904fd8e5ce757e79f4579c67a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 13:34:56.031412 1192192 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17581-1186318/.minikube/ca.key
	I1114 13:34:56.532397 1192192 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17581-1186318/.minikube/ca.crt ...
	I1114 13:34:56.532430 1192192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17581-1186318/.minikube/ca.crt: {Name:mkb00b50c2dec746a96c654447648d5d5b5f5827 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 13:34:56.532665 1192192 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17581-1186318/.minikube/ca.key ...
	I1114 13:34:56.532680 1192192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17581-1186318/.minikube/ca.key: {Name:mkcd565c8585e9f3fe050ac0f4251c1f532ff271 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 13:34:56.532787 1192192 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17581-1186318/.minikube/proxy-client-ca.key
	I1114 13:34:56.977318 1192192 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17581-1186318/.minikube/proxy-client-ca.crt ...
	I1114 13:34:56.977351 1192192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17581-1186318/.minikube/proxy-client-ca.crt: {Name:mka7ce89e6639b6ed2353480cf692aa292726d34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 13:34:56.978098 1192192 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17581-1186318/.minikube/proxy-client-ca.key ...
	I1114 13:34:56.978114 1192192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17581-1186318/.minikube/proxy-client-ca.key: {Name:mkd5e57396eacebb691e2b93ae3a3f9f5c9f5f4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 13:34:56.978258 1192192 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/addons-008546/client.key
	I1114 13:34:56.978276 1192192 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/addons-008546/client.crt with IP's: []
	I1114 13:34:57.479184 1192192 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/addons-008546/client.crt ...
	I1114 13:34:57.479216 1192192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/addons-008546/client.crt: {Name:mk13cc7d1f4969fec30f8a7ff02253e7269607f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 13:34:57.479400 1192192 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/addons-008546/client.key ...
	I1114 13:34:57.479414 1192192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/addons-008546/client.key: {Name:mka054563cd7140a3b3d84bf08dc08656a460881 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 13:34:57.479502 1192192 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/addons-008546/apiserver.key.dd3b5fb2
	I1114 13:34:57.479523 1192192 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/addons-008546/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1114 13:34:58.344963 1192192 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/addons-008546/apiserver.crt.dd3b5fb2 ...
	I1114 13:34:58.345009 1192192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/addons-008546/apiserver.crt.dd3b5fb2: {Name:mk20d2b93fab4818912b136bc7e27e0f4b2b66e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 13:34:58.345250 1192192 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/addons-008546/apiserver.key.dd3b5fb2 ...
	I1114 13:34:58.345265 1192192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/addons-008546/apiserver.key.dd3b5fb2: {Name:mk688fb0bc27b4af8a6ca77989a8d75d8e887bfb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 13:34:58.345393 1192192 certs.go:337] copying /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/addons-008546/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/addons-008546/apiserver.crt
	I1114 13:34:58.345482 1192192 certs.go:341] copying /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/addons-008546/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/addons-008546/apiserver.key
	I1114 13:34:58.345548 1192192 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/addons-008546/proxy-client.key
	I1114 13:34:58.345574 1192192 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/addons-008546/proxy-client.crt with IP's: []
	I1114 13:34:59.012330 1192192 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/addons-008546/proxy-client.crt ...
	I1114 13:34:59.012373 1192192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/addons-008546/proxy-client.crt: {Name:mk05f697925149a130950bdfdb4be5d691d1ded2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 13:34:59.012618 1192192 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/addons-008546/proxy-client.key ...
	I1114 13:34:59.012633 1192192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/addons-008546/proxy-client.key: {Name:mke4b02ae0ad4266a44a7e215d48ea5098896b4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 13:34:59.013387 1192192 certs.go:437] found cert: /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/home/jenkins/minikube-integration/17581-1186318/.minikube/certs/ca-key.pem (1675 bytes)
	I1114 13:34:59.013439 1192192 certs.go:437] found cert: /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/home/jenkins/minikube-integration/17581-1186318/.minikube/certs/ca.pem (1082 bytes)
	I1114 13:34:59.013471 1192192 certs.go:437] found cert: /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/home/jenkins/minikube-integration/17581-1186318/.minikube/certs/cert.pem (1123 bytes)
	I1114 13:34:59.013500 1192192 certs.go:437] found cert: /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/home/jenkins/minikube-integration/17581-1186318/.minikube/certs/key.pem (1675 bytes)
	I1114 13:34:59.014111 1192192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/addons-008546/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1114 13:34:59.045948 1192192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/addons-008546/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1114 13:34:59.077214 1192192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/addons-008546/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1114 13:34:59.107198 1192192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/addons-008546/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1114 13:34:59.136709 1192192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1114 13:34:59.166611 1192192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1114 13:34:59.195553 1192192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1114 13:34:59.224387 1192192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1114 13:34:59.253467 1192192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1114 13:34:59.282588 1192192 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1114 13:34:59.303614 1192192 ssh_runner.go:195] Run: openssl version
	I1114 13:34:59.311178 1192192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1114 13:34:59.322955 1192192 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1114 13:34:59.327913 1192192 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 14 13:34 /usr/share/ca-certificates/minikubeCA.pem
	I1114 13:34:59.327992 1192192 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1114 13:34:59.336493 1192192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1114 13:34:59.348147 1192192 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1114 13:34:59.352690 1192192 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1114 13:34:59.352779 1192192 kubeadm.go:404] StartCluster: {Name:addons-008546 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:addons-008546 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 13:34:59.352857 1192192 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1114 13:34:59.352924 1192192 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1114 13:34:59.396254 1192192 cri.go:89] found id: ""
	I1114 13:34:59.396331 1192192 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1114 13:34:59.407133 1192192 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1114 13:34:59.418433 1192192 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1114 13:34:59.418499 1192192 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1114 13:34:59.429187 1192192 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1114 13:34:59.429245 1192192 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1114 13:34:59.493322 1192192 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1114 13:34:59.493710 1192192 kubeadm.go:322] [preflight] Running pre-flight checks
	I1114 13:34:59.545311 1192192 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1114 13:34:59.545512 1192192 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1049-aws
	I1114 13:34:59.545604 1192192 kubeadm.go:322] OS: Linux
	I1114 13:34:59.545718 1192192 kubeadm.go:322] CGROUPS_CPU: enabled
	I1114 13:34:59.545784 1192192 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1114 13:34:59.545836 1192192 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1114 13:34:59.545886 1192192 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1114 13:34:59.545934 1192192 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1114 13:34:59.545981 1192192 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1114 13:34:59.546027 1192192 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1114 13:34:59.546082 1192192 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1114 13:34:59.546158 1192192 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1114 13:34:59.629440 1192192 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1114 13:34:59.629547 1192192 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1114 13:34:59.629642 1192192 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1114 13:34:59.893723 1192192 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1114 13:34:59.898747 1192192 out.go:204]   - Generating certificates and keys ...
	I1114 13:34:59.898983 1192192 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1114 13:34:59.899083 1192192 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1114 13:35:00.225867 1192192 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1114 13:35:00.420811 1192192 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1114 13:35:00.787162 1192192 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1114 13:35:01.081086 1192192 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1114 13:35:01.824976 1192192 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1114 13:35:01.825236 1192192 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-008546 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1114 13:35:02.110407 1192192 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1114 13:35:02.110771 1192192 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-008546 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1114 13:35:02.314509 1192192 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1114 13:35:03.246193 1192192 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1114 13:35:03.399335 1192192 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1114 13:35:03.399424 1192192 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1114 13:35:03.974763 1192192 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1114 13:35:04.638472 1192192 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1114 13:35:05.273990 1192192 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1114 13:35:05.756257 1192192 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1114 13:35:05.757259 1192192 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1114 13:35:05.760687 1192192 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1114 13:35:05.763368 1192192 out.go:204]   - Booting up control plane ...
	I1114 13:35:05.763502 1192192 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1114 13:35:05.763580 1192192 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1114 13:35:05.764446 1192192 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1114 13:35:05.777642 1192192 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1114 13:35:05.778555 1192192 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1114 13:35:05.778823 1192192 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1114 13:35:05.883418 1192192 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1114 13:35:13.384863 1192192 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.501883 seconds
	I1114 13:35:13.384979 1192192 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1114 13:35:13.401465 1192192 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1114 13:35:13.926948 1192192 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1114 13:35:13.927169 1192192 kubeadm.go:322] [mark-control-plane] Marking the node addons-008546 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1114 13:35:14.438087 1192192 kubeadm.go:322] [bootstrap-token] Using token: zkqsrv.0hjzijiy2nqbphbq
	I1114 13:35:14.440167 1192192 out.go:204]   - Configuring RBAC rules ...
	I1114 13:35:14.440288 1192192 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1114 13:35:14.446146 1192192 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1114 13:35:14.455964 1192192 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1114 13:35:14.460472 1192192 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1114 13:35:14.465020 1192192 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1114 13:35:14.468935 1192192 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1114 13:35:14.485097 1192192 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1114 13:35:14.750225 1192192 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1114 13:35:14.870892 1192192 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1114 13:35:14.871272 1192192 kubeadm.go:322] 
	I1114 13:35:14.871337 1192192 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1114 13:35:14.871343 1192192 kubeadm.go:322] 
	I1114 13:35:14.871416 1192192 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1114 13:35:14.871421 1192192 kubeadm.go:322] 
	I1114 13:35:14.871445 1192192 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1114 13:35:14.871681 1192192 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1114 13:35:14.871735 1192192 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1114 13:35:14.871740 1192192 kubeadm.go:322] 
	I1114 13:35:14.871791 1192192 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1114 13:35:14.871796 1192192 kubeadm.go:322] 
	I1114 13:35:14.871844 1192192 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1114 13:35:14.871849 1192192 kubeadm.go:322] 
	I1114 13:35:14.871898 1192192 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1114 13:35:14.871969 1192192 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1114 13:35:14.872033 1192192 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1114 13:35:14.872038 1192192 kubeadm.go:322] 
	I1114 13:35:14.872117 1192192 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1114 13:35:14.872197 1192192 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1114 13:35:14.872202 1192192 kubeadm.go:322] 
	I1114 13:35:14.872281 1192192 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token zkqsrv.0hjzijiy2nqbphbq \
	I1114 13:35:14.872377 1192192 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:1a1b25420be6487c50639ce0b981e16ee30b54e658d487c3adf6952ff2c4a2c6 \
	I1114 13:35:14.872400 1192192 kubeadm.go:322] 	--control-plane 
	I1114 13:35:14.872404 1192192 kubeadm.go:322] 
	I1114 13:35:14.872483 1192192 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1114 13:35:14.872488 1192192 kubeadm.go:322] 
	I1114 13:35:14.872618 1192192 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token zkqsrv.0hjzijiy2nqbphbq \
	I1114 13:35:14.872715 1192192 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:1a1b25420be6487c50639ce0b981e16ee30b54e658d487c3adf6952ff2c4a2c6 
	I1114 13:35:14.876256 1192192 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1049-aws\n", err: exit status 1
	I1114 13:35:14.876461 1192192 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
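The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's public key. Per the upstream kubeadm documentation it can be recomputed on the control plane as follows (a sketch assuming an RSA CA key; certificatesDir is /var/lib/minikube/certs per the config earlier in this log):

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
	# expected to print 1a1b25420be6487c50639ce0b981e16ee30b54e658d487c3adf6952ff2c4a2c6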
	I1114 13:35:14.876497 1192192 cni.go:84] Creating CNI manager for ""
	I1114 13:35:14.876528 1192192 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1114 13:35:14.880511 1192192 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1114 13:35:14.882722 1192192 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1114 13:35:14.900905 1192192 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1114 13:35:14.900922 1192192 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1114 13:35:14.973799 1192192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
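The manifest applied above installs kindnet for pod networking. Assuming the DaemonSet is named kindnet in kube-system (as in minikube's bundled manifest; the name is illustrative here), a quick post-apply check would be:

	sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system get daemonset kindnet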
	I1114 13:35:15.879902 1192192 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1114 13:35:15.880031 1192192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=6d8573efb5a7770e21024de23a29d810b200278b minikube.k8s.io/name=addons-008546 minikube.k8s.io/updated_at=2023_11_14T13_35_15_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:35:15.880066 1192192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:35:16.046927 1192192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:35:16.046940 1192192 ops.go:34] apiserver oom_adj: -16
	I1114 13:35:16.150497 1192192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:35:16.746192 1192192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:35:17.245969 1192192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:35:17.746546 1192192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:35:18.246342 1192192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:35:18.746364 1192192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:35:19.246313 1192192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:35:19.746747 1192192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:35:20.246782 1192192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:35:20.746621 1192192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:35:21.246524 1192192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:35:21.746072 1192192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:35:22.246700 1192192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:35:22.746536 1192192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:35:23.246462 1192192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:35:23.746600 1192192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:35:24.245819 1192192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:35:24.746551 1192192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:35:25.245797 1192192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:35:25.746162 1192192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:35:26.246096 1192192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:35:26.746441 1192192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:35:27.246014 1192192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:35:27.746498 1192192 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:35:27.861170 1192192 kubeadm.go:1081] duration metric: took 11.981195413s to wait for elevateKubeSystemPrivileges.
	I1114 13:35:27.861197 1192192 kubeadm.go:406] StartCluster complete in 28.508422863s
	I1114 13:35:27.861215 1192192 settings.go:142] acquiring lock: {Name:mk8b1f62ebfea123b4e39d0037f993206e354b59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 13:35:27.861365 1192192 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17581-1186318/kubeconfig
	I1114 13:35:27.861785 1192192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17581-1186318/kubeconfig: {Name:mkf1191f735848932fc7f3417e1088220acbc478 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 13:35:27.862446 1192192 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1114 13:35:27.862734 1192192 config.go:182] Loaded profile config "addons-008546": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 13:35:27.862862 1192192 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true]
	I1114 13:35:27.862942 1192192 addons.go:69] Setting volumesnapshots=true in profile "addons-008546"
	I1114 13:35:27.862960 1192192 addons.go:231] Setting addon volumesnapshots=true in "addons-008546"
	I1114 13:35:27.863030 1192192 host.go:66] Checking if "addons-008546" exists ...
	I1114 13:35:27.863520 1192192 cli_runner.go:164] Run: docker container inspect addons-008546 --format={{.State.Status}}
	I1114 13:35:27.864851 1192192 addons.go:69] Setting ingress-dns=true in profile "addons-008546"
	I1114 13:35:27.864889 1192192 addons.go:231] Setting addon ingress-dns=true in "addons-008546"
	I1114 13:35:27.864963 1192192 host.go:66] Checking if "addons-008546" exists ...
	I1114 13:35:27.865426 1192192 cli_runner.go:164] Run: docker container inspect addons-008546 --format={{.State.Status}}
	I1114 13:35:27.865974 1192192 addons.go:69] Setting inspektor-gadget=true in profile "addons-008546"
	I1114 13:35:27.865990 1192192 addons.go:69] Setting cloud-spanner=true in profile "addons-008546"
	I1114 13:35:27.866000 1192192 addons.go:69] Setting metrics-server=true in profile "addons-008546"
	I1114 13:35:27.866010 1192192 addons.go:231] Setting addon metrics-server=true in "addons-008546"
	I1114 13:35:27.866013 1192192 addons.go:231] Setting addon cloud-spanner=true in "addons-008546"
	I1114 13:35:27.866056 1192192 host.go:66] Checking if "addons-008546" exists ...
	I1114 13:35:27.866063 1192192 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-008546"
	I1114 13:35:27.866071 1192192 addons.go:231] Setting addon nvidia-device-plugin=true in "addons-008546"
	I1114 13:35:27.866091 1192192 host.go:66] Checking if "addons-008546" exists ...
	I1114 13:35:27.866475 1192192 cli_runner.go:164] Run: docker container inspect addons-008546 --format={{.State.Status}}
	I1114 13:35:27.866486 1192192 cli_runner.go:164] Run: docker container inspect addons-008546 --format={{.State.Status}}
	I1114 13:35:27.869123 1192192 addons.go:69] Setting registry=true in profile "addons-008546"
	I1114 13:35:27.869157 1192192 addons.go:231] Setting addon registry=true in "addons-008546"
	I1114 13:35:27.869204 1192192 host.go:66] Checking if "addons-008546" exists ...
	I1114 13:35:27.869618 1192192 cli_runner.go:164] Run: docker container inspect addons-008546 --format={{.State.Status}}
	I1114 13:35:27.866056 1192192 host.go:66] Checking if "addons-008546" exists ...
	I1114 13:35:27.893204 1192192 cli_runner.go:164] Run: docker container inspect addons-008546 --format={{.State.Status}}
	I1114 13:35:27.878610 1192192 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-008546"
	I1114 13:35:27.896224 1192192 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-008546"
	I1114 13:35:27.896311 1192192 host.go:66] Checking if "addons-008546" exists ...
	I1114 13:35:27.896817 1192192 cli_runner.go:164] Run: docker container inspect addons-008546 --format={{.State.Status}}
	I1114 13:35:27.878622 1192192 addons.go:69] Setting default-storageclass=true in profile "addons-008546"
	I1114 13:35:27.903364 1192192 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-008546"
	I1114 13:35:27.906520 1192192 cli_runner.go:164] Run: docker container inspect addons-008546 --format={{.State.Status}}
	I1114 13:35:27.878629 1192192 addons.go:69] Setting gcp-auth=true in profile "addons-008546"
	I1114 13:35:27.915754 1192192 mustload.go:65] Loading cluster: addons-008546
	I1114 13:35:27.878633 1192192 addons.go:69] Setting ingress=true in profile "addons-008546"
	I1114 13:35:27.865995 1192192 addons.go:231] Setting addon inspektor-gadget=true in "addons-008546"
	I1114 13:35:27.884765 1192192 addons.go:69] Setting storage-provisioner=true in profile "addons-008546"
	I1114 13:35:27.884778 1192192 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-008546"
	I1114 13:35:27.924726 1192192 addons.go:231] Setting addon ingress=true in "addons-008546"
	I1114 13:35:27.924815 1192192 host.go:66] Checking if "addons-008546" exists ...
	I1114 13:35:27.928838 1192192 cli_runner.go:164] Run: docker container inspect addons-008546 --format={{.State.Status}}
	I1114 13:35:27.929011 1192192 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-008546"
	I1114 13:35:27.941457 1192192 config.go:182] Loaded profile config "addons-008546": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 13:35:27.941808 1192192 cli_runner.go:164] Run: docker container inspect addons-008546 --format={{.State.Status}}
	I1114 13:35:27.953495 1192192 host.go:66] Checking if "addons-008546" exists ...
	I1114 13:35:27.954076 1192192 cli_runner.go:164] Run: docker container inspect addons-008546 --format={{.State.Status}}
	I1114 13:35:27.974967 1192192 addons.go:231] Setting addon storage-provisioner=true in "addons-008546"
	I1114 13:35:27.975089 1192192 host.go:66] Checking if "addons-008546" exists ...
	I1114 13:35:27.975570 1192192 cli_runner.go:164] Run: docker container inspect addons-008546 --format={{.State.Status}}
	I1114 13:35:28.003144 1192192 cli_runner.go:164] Run: docker container inspect addons-008546 --format={{.State.Status}}
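Each addon goroutine above begins by probing the node container's lifecycle state; the repeated "docker container inspect --format={{.State.Status}}" calls resolve to a single status word that gates the rest of the enable path. A minimal way to reproduce the probe by hand (profile name taken from this run; the "running" output is the expected state, not captured verbatim in this log):

	$ docker container inspect addons-008546 --format={{.State.Status}}
	running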
	I1114 13:35:28.049973 1192192 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.2
	I1114 13:35:28.062329 1192192 addons.go:423] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1114 13:35:28.062483 1192192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1114 13:35:28.062575 1192192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-008546
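The second inspect template digs the published SSH port out of the container's port map; the sshutil lines later in this log show it resolved to 34279. Reproduced by hand with the same Go template cli_runner.go uses:

	$ docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-008546
	34279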
	I1114 13:35:28.065322 1192192 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1114 13:35:28.075827 1192192 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1114 13:35:28.075916 1192192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1114 13:35:28.076012 1192192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-008546
	I1114 13:35:28.065581 1192192 out.go:177]   - Using image docker.io/registry:2.8.3
	I1114 13:35:28.094870 1192192 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1114 13:35:28.065589 1192192 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1114 13:35:28.065594 1192192 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.12
	I1114 13:35:28.119263 1192192 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I1114 13:35:28.119328 1192192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1114 13:35:28.119428 1192192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-008546
	I1114 13:35:28.104104 1192192 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I1114 13:35:28.138661 1192192 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1114 13:35:28.140804 1192192 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1114 13:35:28.137963 1192192 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1114 13:35:28.137977 1192192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1114 13:35:28.139874 1192192 addons.go:231] Setting addon default-storageclass=true in "addons-008546"
	I1114 13:35:28.143684 1192192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1114 13:35:28.143791 1192192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-008546
	I1114 13:35:28.146023 1192192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-008546
	I1114 13:35:28.154902 1192192 host.go:66] Checking if "addons-008546" exists ...
	I1114 13:35:28.154976 1192192 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1114 13:35:28.156502 1192192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1114 13:35:28.156588 1192192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-008546
	I1114 13:35:28.163270 1192192 host.go:66] Checking if "addons-008546" exists ...
	I1114 13:35:28.164819 1192192 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.4
	I1114 13:35:28.176322 1192192 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1114 13:35:28.164982 1192192 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1114 13:35:28.176269 1192192 cli_runner.go:164] Run: docker container inspect addons-008546 --format={{.State.Status}}
	I1114 13:35:28.192294 1192192 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1114 13:35:28.198315 1192192 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1114 13:35:28.204679 1192192 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1114 13:35:28.198761 1192192 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1114 13:35:28.199070 1192192 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1114 13:35:28.216517 1192192 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.22.0
	I1114 13:35:28.208498 1192192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I1114 13:35:28.222200 1192192 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1114 13:35:28.220623 1192192 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1114 13:35:28.220702 1192192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-008546
	I1114 13:35:28.228598 1192192 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1114 13:35:28.225116 1192192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1114 13:35:28.230708 1192192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-008546
	I1114 13:35:28.250056 1192192 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1114 13:35:28.252348 1192192 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1114 13:35:28.259225 1192192 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1114 13:35:28.259247 1192192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1114 13:35:28.259313 1192192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-008546
	I1114 13:35:28.261916 1192192 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 13:35:28.272703 1192192 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1114 13:35:28.272731 1192192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1114 13:35:28.272802 1192192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-008546
	I1114 13:35:28.292347 1192192 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-008546" context rescaled to 1 replicas
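kapi.go:248 scales the coredns deployment down to a single replica before addon verification starts. kapi.go does this through the API directly; the equivalent imperative command, shown here purely as an illustration, would be:

	$ kubectl --context addons-008546 -n kube-system scale deployment coredns --replicas=1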
	I1114 13:35:28.292382 1192192 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1114 13:35:28.305819 1192192 out.go:177] * Verifying Kubernetes components...
	I1114 13:35:28.316138 1192192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 13:35:28.314767 1192192 addons.go:231] Setting addon storage-provisioner-rancher=true in "addons-008546"
	I1114 13:35:28.316349 1192192 host.go:66] Checking if "addons-008546" exists ...
	I1114 13:35:28.316928 1192192 cli_runner.go:164] Run: docker container inspect addons-008546 --format={{.State.Status}}
	I1114 13:35:28.315236 1192192 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34279 SSHKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/addons-008546/id_rsa Username:docker}
	I1114 13:35:28.334766 1192192 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34279 SSHKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/addons-008546/id_rsa Username:docker}
	I1114 13:35:28.408923 1192192 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34279 SSHKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/addons-008546/id_rsa Username:docker}
	I1114 13:35:28.417559 1192192 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1114 13:35:28.417580 1192192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1114 13:35:28.417643 1192192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-008546
	I1114 13:35:28.425164 1192192 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34279 SSHKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/addons-008546/id_rsa Username:docker}
	I1114 13:35:28.432869 1192192 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34279 SSHKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/addons-008546/id_rsa Username:docker}
	I1114 13:35:28.440458 1192192 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34279 SSHKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/addons-008546/id_rsa Username:docker}
	I1114 13:35:28.514854 1192192 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34279 SSHKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/addons-008546/id_rsa Username:docker}
	I1114 13:35:28.518250 1192192 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34279 SSHKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/addons-008546/id_rsa Username:docker}
	I1114 13:35:28.543945 1192192 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1114 13:35:28.540606 1192192 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34279 SSHKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/addons-008546/id_rsa Username:docker}
	I1114 13:35:28.540722 1192192 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34279 SSHKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/addons-008546/id_rsa Username:docker}
	I1114 13:35:28.559056 1192192 out.go:177]   - Using image docker.io/busybox:stable
	I1114 13:35:28.561650 1192192 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1114 13:35:28.561674 1192192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1114 13:35:28.561737 1192192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-008546
	I1114 13:35:28.571850 1192192 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34279 SSHKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/addons-008546/id_rsa Username:docker}
	I1114 13:35:28.592482 1192192 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34279 SSHKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/addons-008546/id_rsa Username:docker}
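The "scp memory" lines above do not shell out to scp: each rendered manifest is streamed from memory over one of the SSH sessions opened against 127.0.0.1:34279 and written under /etc/kubernetes/addons/ inside the node. A rough hand-rolled equivalent for a single file (key path, port, and destination taken from this log; using tee as the write mechanism is an assumption about the transport, not a quote from it):

	$ ssh -i /home/jenkins/minikube-integration/17581-1186318/.minikube/machines/addons-008546/id_rsa \
	      -p 34279 docker@127.0.0.1 \
	      "sudo tee /etc/kubernetes/addons/storageclass.yaml >/dev/null" < storageclass.yaml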
	I1114 13:35:28.875453 1192192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1114 13:35:28.898338 1192192 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1114 13:35:28.898371 1192192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1114 13:35:29.016075 1192192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1114 13:35:29.023019 1192192 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1114 13:35:29.023045 1192192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1114 13:35:29.057518 1192192 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1114 13:35:29.057540 1192192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1114 13:35:29.061736 1192192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1114 13:35:29.062326 1192192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1114 13:35:29.091047 1192192 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I1114 13:35:29.091081 1192192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1114 13:35:29.109532 1192192 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1114 13:35:29.109571 1192192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1114 13:35:29.122445 1192192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1114 13:35:29.148491 1192192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1114 13:35:29.158258 1192192 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1114 13:35:29.158282 1192192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1114 13:35:29.161436 1192192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1114 13:35:29.216340 1192192 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1114 13:35:29.216385 1192192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1114 13:35:29.250938 1192192 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1114 13:35:29.250963 1192192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1114 13:35:29.298976 1192192 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1114 13:35:29.299007 1192192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1114 13:35:29.310075 1192192 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1114 13:35:29.310100 1192192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1114 13:35:29.347534 1192192 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I1114 13:35:29.347576 1192192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1114 13:35:29.387475 1192192 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1114 13:35:29.387502 1192192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1114 13:35:29.406778 1192192 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1114 13:35:29.406807 1192192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1114 13:35:29.498532 1192192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1114 13:35:29.531095 1192192 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1114 13:35:29.531129 1192192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1114 13:35:29.539098 1192192 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1114 13:35:29.539120 1192192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1114 13:35:29.555553 1192192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1114 13:35:29.556251 1192192 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1114 13:35:29.556270 1192192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1114 13:35:29.718919 1192192 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1114 13:35:29.718956 1192192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1114 13:35:29.733466 1192192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1114 13:35:29.751554 1192192 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1114 13:35:29.751580 1192192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1114 13:35:29.890498 1192192 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1114 13:35:29.890531 1192192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1114 13:35:29.963494 1192192 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1114 13:35:29.963519 1192192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1114 13:35:30.031230 1192192 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I1114 13:35:30.031269 1192192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1114 13:35:30.098448 1192192 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1114 13:35:30.098482 1192192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1114 13:35:30.114648 1192192 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1114 13:35:30.114682 1192192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I1114 13:35:30.166448 1192192 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1114 13:35:30.166475 1192192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1114 13:35:30.225842 1192192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
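Every apply in this phase runs inside the node, using the in-node kubeconfig and the pinned v1.28.3 kubectl binary that minikube ships there, so the host needs no kubectl of its own. To poke at the same cluster state interactively you can reuse that binary over minikube ssh (a debugging convenience, not something the harness itself does):

	$ minikube -p addons-008546 ssh -- \
	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	    /var/lib/minikube/binaries/v1.28.3/kubectl get pods -A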
	I1114 13:35:30.285906 1192192 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1114 13:35:30.285948 1192192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1114 13:35:30.356807 1192192 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1114 13:35:30.356833 1192192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1114 13:35:30.466184 1192192 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1114 13:35:30.466213 1192192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1114 13:35:30.609031 1192192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1114 13:35:30.653025 1192192 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.444317674s)
	I1114 13:35:30.653066 1192192 start.go:926] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
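The 2.4s configmap rewrite completed above is the sed pipeline launched at 13:35:28.199070: it splices a hosts plugin block (plus a log directive) into the CoreDNS Corefile so pods can resolve the docker gateway by name. After the replace, the relevant Corefile stanza should read roughly as follows (reconstructed from the sed expressions; the full Corefile is not dumped in this log):

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}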
	I1114 13:35:30.653121 1192192 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.336918192s)
	I1114 13:35:30.654022 1192192 node_ready.go:35] waiting up to 6m0s for node "addons-008546" to be "Ready" ...
	I1114 13:35:31.954388 1192192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.078897063s)
	I1114 13:35:32.835772 1192192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.819662608s)
	I1114 13:35:32.897076 1192192 node_ready.go:58] node "addons-008546" has status "Ready":"False"
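node_ready.go:35 polls the node object until its Ready condition flips to True, logging the status on each miss as above, with the 6m0s budget carried over from start.go:223. The same wait expressed with kubectl (an equivalent formulation, not the harness's actual code path):

	$ kubectl --context addons-008546 wait --for=condition=Ready node/addons-008546 --timeout=6m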
	I1114 13:35:33.637570 1192192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.575795358s)
	I1114 13:35:33.637843 1192192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.575494806s)
	I1114 13:35:34.202970 1192192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.080487663s)
	I1114 13:35:34.203004 1192192 addons.go:467] Verifying addon ingress=true in "addons-008546"
	I1114 13:35:34.205305 1192192 out.go:177] * Verifying ingress addon...
	I1114 13:35:34.203218 1192192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.054698997s)
	I1114 13:35:34.203314 1192192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.04185469s)
	I1114 13:35:34.203355 1192192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.704797763s)
	I1114 13:35:34.203424 1192192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.64784409s)
	I1114 13:35:34.203521 1192192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.470025775s)
	I1114 13:35:34.203592 1192192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.977721682s)
	I1114 13:35:34.207577 1192192 addons.go:467] Verifying addon registry=true in "addons-008546"
	I1114 13:35:34.209908 1192192 out.go:177] * Verifying registry addon...
	I1114 13:35:34.207971 1192192 addons.go:467] Verifying addon metrics-server=true in "addons-008546"
	W1114 13:35:34.207995 1192192 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1114 13:35:34.208222 1192192 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1114 13:35:34.213529 1192192 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1114 13:35:34.210027 1192192 retry.go:31] will retry after 265.535163ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
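This failure is a CRD establishment race, not a broken manifest: the VolumeSnapshotClass object in csi-hostpath-snapshotclass.yaml is submitted in the same apply batch that creates the snapshot.storage.k8s.io CRDs, and the API server has not registered the new kind yet. The harness recovers by retrying with kubectl apply --force at 13:35:34.479620, which succeeds about 1.7s later. Outside this harness, the usual fix is to gate the dependent objects on CRD establishment first (sketch; the CRD name is taken from the stdout above):

	$ kubectl wait --for condition=established --timeout=60s \
	    crd/volumesnapshotclasses.snapshot.storage.k8s.io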
	I1114 13:35:34.220819 1192192 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1114 13:35:34.220893 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:35:34.232287 1192192 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1114 13:35:34.232349 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:35:34.234304 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:35:34.255290 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
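The kapi.go:96 lines that dominate the rest of this log are a poll loop: on a short fixed interval each verifier re-lists the pods matching its label selector and reports their aggregate phase, here still Pending because the node itself is not Ready yet. A standalone equivalent for the ingress-controller wait (illustrative; kapi.go watches through the API rather than shelling out):

	$ kubectl --context addons-008546 -n ingress-nginx wait \
	    --for=condition=Ready pod -l app.kubernetes.io/name=ingress-nginx --timeout=5m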
	I1114 13:35:34.479620 1192192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1114 13:35:34.488815 1192192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.879731994s)
	I1114 13:35:34.488887 1192192 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-008546"
	I1114 13:35:34.491607 1192192 out.go:177] * Verifying csi-hostpath-driver addon...
	I1114 13:35:34.494490 1192192 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1114 13:35:34.510524 1192192 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1114 13:35:34.510555 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:35:34.526343 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:35:34.762302 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:35:34.768654 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:35:35.036781 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:35:35.240093 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:35:35.264906 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:35:35.356748 1192192 node_ready.go:58] node "addons-008546" has status "Ready":"False"
	I1114 13:35:35.589886 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:35:35.796666 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:35:35.837865 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:35:36.054629 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:35:36.156307 1192192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.676631385s)
	I1114 13:35:36.239792 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:35:36.265486 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:35:36.531497 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:35:36.739094 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:35:36.741680 1192192 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1114 13:35:36.741825 1192192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-008546
	I1114 13:35:36.766831 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:35:36.779631 1192192 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34279 SSHKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/addons-008546/id_rsa Username:docker}
	I1114 13:35:36.966164 1192192 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1114 13:35:36.995146 1192192 addons.go:231] Setting addon gcp-auth=true in "addons-008546"
	I1114 13:35:36.995194 1192192 host.go:66] Checking if "addons-008546" exists ...
	I1114 13:35:36.995642 1192192 cli_runner.go:164] Run: docker container inspect addons-008546 --format={{.State.Status}}
	I1114 13:35:37.017064 1192192 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1114 13:35:37.017120 1192192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-008546
	I1114 13:35:37.042477 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:35:37.048152 1192192 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34279 SSHKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/addons-008546/id_rsa Username:docker}
	I1114 13:35:37.188929 1192192 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1114 13:35:37.191336 1192192 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1114 13:35:37.193661 1192192 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1114 13:35:37.193688 1192192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1114 13:35:37.238956 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:35:37.242596 1192192 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1114 13:35:37.242659 1192192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1114 13:35:37.265279 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:35:37.288592 1192192 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1114 13:35:37.288653 1192192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I1114 13:35:37.328924 1192192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
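gcp-auth is the one addon enabled after the main batch because it needs host-side credentials: the application-default credentials (162 bytes) and project id (12 bytes) copied in at 13:35:36 back the webhook that injects them into workloads. On a developer machine the whole flow is driven by one standard minikube command (the profile flag mirrors this run):

	$ minikube -p addons-008546 addons enable gcp-auth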
	I1114 13:35:37.531656 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:35:37.739319 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:35:37.764728 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:35:37.837248 1192192 node_ready.go:58] node "addons-008546" has status "Ready":"False"
	I1114 13:35:38.049593 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:35:38.119834 1192192 addons.go:467] Verifying addon gcp-auth=true in "addons-008546"
	I1114 13:35:38.123850 1192192 out.go:177] * Verifying gcp-auth addon...
	I1114 13:35:38.127045 1192192 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1114 13:35:38.144317 1192192 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1114 13:35:38.144336 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:35:38.150532 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:35:38.239087 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:35:38.265174 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:35:38.532480 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:35:38.654354 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:35:38.739385 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:35:38.764582 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:35:39.049765 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:35:39.154857 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:35:39.238979 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:35:39.264437 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:35:39.532332 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:35:39.655379 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:35:39.739048 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:35:39.765449 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:35:39.837516 1192192 node_ready.go:58] node "addons-008546" has status "Ready":"False"
	I1114 13:35:40.032832 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:35:40.155709 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:35:40.240672 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:35:40.266492 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:35:40.535380 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:35:40.654686 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:35:40.742583 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:35:40.764817 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:35:41.031471 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:35:41.154357 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:35:41.239093 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:35:41.264193 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:35:41.531565 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:35:41.654393 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:35:41.738439 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:35:41.764742 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:35:42.030964 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:35:42.154667 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:35:42.239260 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:35:42.264797 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:35:42.338018 1192192 node_ready.go:58] node "addons-008546" has status "Ready":"False"
	I1114 13:35:42.531264 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:35:42.654734 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:35:42.747010 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:35:42.764345 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:35:43.030547 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:35:43.154650 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:35:43.238853 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:35:43.264822 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:35:43.530761 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:35:43.654778 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:35:43.738711 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:35:43.764519 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:35:44.031513 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:35:44.154412 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:35:44.239444 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:35:44.264856 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:35:44.531027 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:35:44.654605 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:35:44.738723 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:35:44.764583 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:35:44.837034 1192192 node_ready.go:58] node "addons-008546" has status "Ready":"False"
	I1114 13:35:45.032580 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:35:45.154501 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:35:45.239712 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:35:45.265755 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:35:45.531357 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:35:45.654750 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:35:45.738341 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:35:45.764478 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:35:46.031523 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:35:46.154421 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:35:46.238837 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:35:46.264907 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:35:46.530459 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:35:46.653989 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:35:46.738742 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:35:46.764925 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:35:47.031558 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:35:47.154467 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:35:47.239086 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:35:47.264137 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:35:47.337150 1192192 node_ready.go:58] node "addons-008546" has status "Ready":"False"
	I1114 13:35:47.530461 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:35:47.654406 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:35:47.738663 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:35:47.764980 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:35:48.036113 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:35:48.154509 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:35:48.240756 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:35:48.264653 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:35:48.530898 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:35:48.654438 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:35:48.738656 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:35:48.764699 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:35:49.030946 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:35:49.157672 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:35:49.238821 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:35:49.264895 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:35:49.531405 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:35:49.653938 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:35:49.738429 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:35:49.764174 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:35:49.837256 1192192 node_ready.go:58] node "addons-008546" has status "Ready":"False"
	I1114 13:35:50.030842 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:35:50.154896 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:35:50.238632 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:35:50.264532 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:35:50.530985 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:35:50.653929 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:35:50.738712 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:35:50.764711 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:35:51.031008 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:35:51.154221 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:35:51.238955 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:35:51.264269 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:35:51.531494 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:35:51.654341 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:35:51.738972 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:35:51.763944 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:35:52.030944 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:35:52.153894 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:35:52.239182 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:35:52.264100 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:35:52.337537 1192192 node_ready.go:58] node "addons-008546" has status "Ready":"False"
	I1114 13:35:52.531010 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:35:52.654777 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:35:52.738426 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:35:52.764329 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:35:53.030948 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:35:53.154468 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:35:53.239109 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:35:53.263908 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:35:53.531151 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:35:53.654687 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:35:53.738878 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:35:53.764781 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:35:54.031346 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:35:54.160920 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:35:54.239192 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:35:54.264171 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:35:54.531317 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:35:54.655105 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:35:54.739319 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:35:54.764279 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:35:54.837206 1192192 node_ready.go:58] node "addons-008546" has status "Ready":"False"
	I1114 13:35:55.030918 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:35:55.154685 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:35:55.243432 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:35:55.264198 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:35:55.530572 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:35:55.654515 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:35:55.739276 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:35:55.764231 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:35:56.031937 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:35:56.154725 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:35:56.238449 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:35:56.264667 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:35:56.530708 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:35:56.654838 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:35:56.738900 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:35:56.764796 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:35:57.031658 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:35:57.154845 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:35:57.238734 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:35:57.264765 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:35:57.336671 1192192 node_ready.go:58] node "addons-008546" has status "Ready":"False"
	I1114 13:35:57.530480 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:35:57.654005 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:35:57.739606 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:35:57.764774 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:35:58.032581 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:35:58.154232 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:35:58.239486 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:35:58.264579 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:35:58.532058 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:35:58.653876 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:35:58.738669 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:35:58.764870 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:35:59.030905 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:35:59.156380 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:35:59.238927 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:35:59.263935 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:35:59.336832 1192192 node_ready.go:58] node "addons-008546" has status "Ready":"False"
	I1114 13:35:59.530998 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:35:59.653777 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:35:59.738419 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:35:59.764749 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:00.036243 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:00.157789 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:00.239659 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:00.265372 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:00.531609 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:00.680810 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:00.747783 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:00.766327 1192192 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1114 13:36:00.766353 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
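
The long runs of kapi.go:96 lines in this log are four independent poll loops, one per addon label selector (registry, csi-hostpath-driver, gcp-auth, ingress-nginx), each re-listing its pods every few hundred milliseconds until all of them leave Pending; "Pending: [<nil>]" is printed while the matched pods (or no pods at all) are still Pending. A minimal client-go sketch of such a loop, under invented names (package sketch, waitPodsRunning) rather than minikube's actual kapi.go internals:

package sketch

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodsRunning polls pods matching selector in ns until every one of
// them reports phase Running, or the timeout elapses.
func waitPodsRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // nothing matched yet; keep polling
			}
			fmt.Printf("Found %d Pods for label selector %s\n", len(pods.Items), selector)
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					return false, nil
				}
			}
			return true, nil
		})
}
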
	I1114 13:36:00.867135 1192192 node_ready.go:49] node "addons-008546" has status "Ready":"True"
	I1114 13:36:00.867175 1192192 node_ready.go:38] duration metric: took 30.213111859s waiting for node "addons-008546" to be "Ready" ...
	I1114 13:36:00.867189 1192192 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
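
The node_ready.go transition just above (repeated Ready:"False", then Ready:"True" at 13:36:00, roughly 30s in) is the same polling idea applied to the node's Ready condition. A hedged sketch under the same assumptions (invented helper name, client-go primitives):

package sketch

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady blocks until the named node's Ready condition is True.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // tolerate transient API errors and retry
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}
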
	I1114 13:36:00.883139 1192192 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-n54k4" in "kube-system" namespace to be "Ready" ...
	I1114 13:36:01.133100 1192192 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1114 13:36:01.133144 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:01.163929 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:01.255086 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:01.290181 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:01.575953 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:01.690967 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:01.747389 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:01.765303 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:02.031875 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:02.155876 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:02.240649 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:02.266411 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:02.532664 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:02.654428 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:02.671698 1192192 pod_ready.go:92] pod "coredns-5dd5756b68-n54k4" in "kube-system" namespace has status "Ready":"True"
	I1114 13:36:02.671726 1192192 pod_ready.go:81] duration metric: took 1.78854272s waiting for pod "coredns-5dd5756b68-n54k4" in "kube-system" namespace to be "Ready" ...
	I1114 13:36:02.671753 1192192 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-008546" in "kube-system" namespace to be "Ready" ...
	I1114 13:36:02.689368 1192192 pod_ready.go:92] pod "etcd-addons-008546" in "kube-system" namespace has status "Ready":"True"
	I1114 13:36:02.689393 1192192 pod_ready.go:81] duration metric: took 17.629321ms waiting for pod "etcd-addons-008546" in "kube-system" namespace to be "Ready" ...
	I1114 13:36:02.689408 1192192 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-008546" in "kube-system" namespace to be "Ready" ...
	I1114 13:36:02.695686 1192192 pod_ready.go:92] pod "kube-apiserver-addons-008546" in "kube-system" namespace has status "Ready":"True"
	I1114 13:36:02.695713 1192192 pod_ready.go:81] duration metric: took 6.295999ms waiting for pod "kube-apiserver-addons-008546" in "kube-system" namespace to be "Ready" ...
	I1114 13:36:02.695725 1192192 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-008546" in "kube-system" namespace to be "Ready" ...
	I1114 13:36:02.706161 1192192 pod_ready.go:92] pod "kube-controller-manager-addons-008546" in "kube-system" namespace has status "Ready":"True"
	I1114 13:36:02.706186 1192192 pod_ready.go:81] duration metric: took 10.452351ms waiting for pod "kube-controller-manager-addons-008546" in "kube-system" namespace to be "Ready" ...
	I1114 13:36:02.706200 1192192 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lcbj5" in "kube-system" namespace to be "Ready" ...
	I1114 13:36:02.741549 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:02.765113 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:02.838777 1192192 pod_ready.go:92] pod "kube-proxy-lcbj5" in "kube-system" namespace has status "Ready":"True"
	I1114 13:36:02.838802 1192192 pod_ready.go:81] duration metric: took 132.594972ms waiting for pod "kube-proxy-lcbj5" in "kube-system" namespace to be "Ready" ...
	I1114 13:36:02.838814 1192192 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-008546" in "kube-system" namespace to be "Ready" ...
	I1114 13:36:03.032929 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:03.156619 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:03.238342 1192192 pod_ready.go:92] pod "kube-scheduler-addons-008546" in "kube-system" namespace has status "Ready":"True"
	I1114 13:36:03.238366 1192192 pod_ready.go:81] duration metric: took 399.544602ms waiting for pod "kube-scheduler-addons-008546" in "kube-system" namespace to be "Ready" ...
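
Each pod_ready.go pair in this stretch (waiting up to 6m0s ... / has status "Ready":"True", plus a duration metric) boils down to reading the PodReady condition off the pod's status. A minimal illustrative helper, again with an invented name:

package sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// podReady reports whether the pod's PodReady condition is currently True.
func podReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}
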
	I1114 13:36:03.238378 1192192 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-7c66d45ddc-rdnlc" in "kube-system" namespace to be "Ready" ...
	I1114 13:36:03.241822 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:03.265664 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:03.533125 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:03.655207 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:03.739674 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:03.780368 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:04.033097 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:04.155488 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:04.239564 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:04.270091 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:04.531798 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:04.655198 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:04.743027 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:04.766170 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:05.032903 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:05.155379 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:05.239163 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:05.266931 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:05.532182 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:05.545853 1192192 pod_ready.go:102] pod "metrics-server-7c66d45ddc-rdnlc" in "kube-system" namespace has status "Ready":"False"
	I1114 13:36:05.654713 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:05.739525 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:05.765531 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:06.033908 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:06.160136 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:06.240640 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:06.265422 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:06.535229 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:06.657221 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:06.740497 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:06.777153 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:07.036070 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:07.158339 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:07.245533 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:07.268520 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:07.536589 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:07.554870 1192192 pod_ready.go:102] pod "metrics-server-7c66d45ddc-rdnlc" in "kube-system" namespace has status "Ready":"False"
	I1114 13:36:07.655186 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:07.741529 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:07.768727 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:08.050360 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:08.155374 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:08.241902 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:08.266229 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:08.533533 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:08.658393 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:08.739356 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:08.766983 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:09.034040 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:09.155141 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:09.241231 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:09.296672 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:09.535544 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:09.562271 1192192 pod_ready.go:92] pod "metrics-server-7c66d45ddc-rdnlc" in "kube-system" namespace has status "Ready":"True"
	I1114 13:36:09.562298 1192192 pod_ready.go:81] duration metric: took 6.323913355s waiting for pod "metrics-server-7c66d45ddc-rdnlc" in "kube-system" namespace to be "Ready" ...
	I1114 13:36:09.562311 1192192 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-z7lg9" in "kube-system" namespace to be "Ready" ...
	I1114 13:36:09.654993 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:09.739485 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:09.766142 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:10.034760 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:10.155175 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:10.240956 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:10.265820 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:10.533117 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:10.654940 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:10.739815 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:10.765703 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:11.033042 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:11.154216 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:11.239178 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:11.264918 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:11.534630 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:11.589538 1192192 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-z7lg9" in "kube-system" namespace has status "Ready":"False"
	I1114 13:36:11.655029 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:11.740589 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:11.769473 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:12.033135 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:12.158893 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:12.240470 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:12.265982 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:12.534123 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:12.655731 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:12.738951 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:12.766604 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:13.034267 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:13.154191 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:13.240000 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:13.266607 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:13.533764 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:13.655125 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:13.740068 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:13.765205 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:14.034012 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:14.090974 1192192 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-z7lg9" in "kube-system" namespace has status "Ready":"False"
	I1114 13:36:14.156176 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:14.257025 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:14.265644 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:14.535304 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:14.654366 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:14.738645 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:14.764962 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:15.033529 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:15.154914 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:15.245121 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:15.267179 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:15.531959 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:15.654906 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:15.738808 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:15.765321 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:16.042980 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:16.154681 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:16.240994 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:16.265454 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:16.533157 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:16.587429 1192192 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-z7lg9" in "kube-system" namespace has status "Ready":"False"
	I1114 13:36:16.654291 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:16.738593 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:16.765598 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:17.032962 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:17.155045 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:17.242102 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:17.265178 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:17.532445 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:17.656740 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:17.739996 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:17.765275 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:18.042873 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:18.155424 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:18.239620 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:18.275280 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:18.532994 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:18.588290 1192192 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-z7lg9" in "kube-system" namespace has status "Ready":"False"
	I1114 13:36:18.654541 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:18.739939 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:18.766073 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:19.035109 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:19.159054 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:19.240389 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:19.265900 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:19.534170 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:19.665181 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:19.746617 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:19.766387 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:20.033271 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:20.154986 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:20.239473 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:20.278586 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:20.534108 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:20.589913 1192192 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-z7lg9" in "kube-system" namespace has status "Ready":"True"
	I1114 13:36:20.589939 1192192 pod_ready.go:81] duration metric: took 11.027621284s waiting for pod "nvidia-device-plugin-daemonset-z7lg9" in "kube-system" namespace to be "Ready" ...
	I1114 13:36:20.589961 1192192 pod_ready.go:38] duration metric: took 19.722725284s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 13:36:20.590003 1192192 api_server.go:52] waiting for apiserver process to appear ...
	I1114 13:36:20.590080 1192192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 13:36:20.604959 1192192 api_server.go:72] duration metric: took 52.312545369s to wait for apiserver process to appear ...
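
The process probe above shells out to pgrep: -f matches against the full command line, -x requires the whole line to match the pattern, and -n picks the newest such process. A hedged os/exec equivalent (helper name is illustrative):

package sketch

import (
	"os/exec"
	"strings"
)

// apiserverPID returns the PID of the newest process whose full command
// line matches kube-apiserver.*minikube.*, mirroring the pgrep call above.
func apiserverPID() (string, error) {
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		return "", err // non-zero exit: no matching process yet
	}
	return strings.TrimSpace(string(out)), nil
}
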
	I1114 13:36:20.604985 1192192 api_server.go:88] waiting for apiserver healthz status ...
	I1114 13:36:20.605002 1192192 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1114 13:36:20.613865 1192192 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1114 13:36:20.615173 1192192 api_server.go:141] control plane version: v1.28.3
	I1114 13:36:20.615198 1192192 api_server.go:131] duration metric: took 10.20628ms to wait for apiserver health ...
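
The healthz probe (api_server.go:253/279 above) is a plain HTTPS GET against https://192.168.49.2:8443/healthz that expects a 200 with body "ok". A self-contained sketch; it skips TLS verification purely for brevity, whereas a faithful client would trust the cluster CA and present client certificates:

package sketch

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz GETs the apiserver /healthz endpoint and fails unless it
// answers 200. InsecureSkipVerify is for illustration only.
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url) // e.g. https://192.168.49.2:8443/healthz
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	return nil // body is normally the literal string "ok"
}
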
	I1114 13:36:20.615207 1192192 system_pods.go:43] waiting for kube-system pods to appear ...
	I1114 13:36:20.625678 1192192 system_pods.go:59] 18 kube-system pods found
	I1114 13:36:20.625716 1192192 system_pods.go:61] "coredns-5dd5756b68-n54k4" [ec5b3ecb-4e5a-43ad-8532-5aed4edd9942] Running
	I1114 13:36:20.625745 1192192 system_pods.go:61] "csi-hostpath-attacher-0" [f55c6f2a-2672-43af-8bfe-3c26a85fac1c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1114 13:36:20.625763 1192192 system_pods.go:61] "csi-hostpath-resizer-0" [b71691cb-b0e4-4676-a800-8e4217b53199] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1114 13:36:20.625773 1192192 system_pods.go:61] "csi-hostpathplugin-fmmzh" [09840aa8-f94d-40a1-a9d5-b7f36836b7f5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1114 13:36:20.625787 1192192 system_pods.go:61] "etcd-addons-008546" [df26f21c-60db-4cb8-acd0-d16ba705c09d] Running
	I1114 13:36:20.625793 1192192 system_pods.go:61] "kindnet-n46x4" [5de9bb04-fd5d-41d7-85d0-91a5ea4cc9d5] Running
	I1114 13:36:20.625799 1192192 system_pods.go:61] "kube-apiserver-addons-008546" [63d2df1e-7269-4134-83b2-a9847618ebb0] Running
	I1114 13:36:20.625804 1192192 system_pods.go:61] "kube-controller-manager-addons-008546" [f9c62bfc-ed16-4713-bb6b-5b1629c2aaca] Running
	I1114 13:36:20.625830 1192192 system_pods.go:61] "kube-ingress-dns-minikube" [ea9a3530-9aab-4914-8a5c-0753a2ee56f8] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1114 13:36:20.625844 1192192 system_pods.go:61] "kube-proxy-lcbj5" [1778ad55-d059-469e-8add-7c8f82eb026e] Running
	I1114 13:36:20.625851 1192192 system_pods.go:61] "kube-scheduler-addons-008546" [cf177c8b-c95c-4b6e-9345-55de4cb0bb88] Running
	I1114 13:36:20.625856 1192192 system_pods.go:61] "metrics-server-7c66d45ddc-rdnlc" [9c93fef0-fca3-46cd-adf4-ed2436c58e74] Running
	I1114 13:36:20.625865 1192192 system_pods.go:61] "nvidia-device-plugin-daemonset-z7lg9" [39138d17-6ce8-4243-924a-592f11b60525] Running
	I1114 13:36:20.625870 1192192 system_pods.go:61] "registry-6zxk7" [aab81737-f3a1-4831-aa4b-580e8350b7bc] Running
	I1114 13:36:20.625884 1192192 system_pods.go:61] "registry-proxy-szh9q" [5df13a97-1d8b-408c-8786-cb99aa641c8d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1114 13:36:20.625904 1192192 system_pods.go:61] "snapshot-controller-58dbcc7b99-d5hrl" [36055ad0-492e-4666-8872-07c5161aa3c2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1114 13:36:20.625920 1192192 system_pods.go:61] "snapshot-controller-58dbcc7b99-zjgxb" [3f05ff66-d51a-4eeb-99e0-cb3b6bc60493] Running
	I1114 13:36:20.625926 1192192 system_pods.go:61] "storage-provisioner" [24955929-a2f6-48aa-9ae4-be5821b1951d] Running
	I1114 13:36:20.625942 1192192 system_pods.go:74] duration metric: took 10.719967ms to wait for pod list to return data ...
	I1114 13:36:20.625957 1192192 default_sa.go:34] waiting for default service account to be created ...
	I1114 13:36:20.630210 1192192 default_sa.go:45] found service account: "default"
	I1114 13:36:20.630235 1192192 default_sa.go:55] duration metric: took 4.271099ms for default service account to be created ...
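
The default_sa.go check passes once the "default" ServiceAccount exists in the "default" namespace; the controller manager creates it asynchronously after the apiserver comes up, so NotFound simply means "keep waiting". Sketch with an invented helper name:

package sketch

import (
	"context"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// defaultSAExists reports whether the "default" ServiceAccount has been
// created yet; a NotFound error is not fatal, just "not yet".
func defaultSAExists(ctx context.Context, cs kubernetes.Interface) (bool, error) {
	_, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		return false, nil
	}
	if err != nil {
		return false, err
	}
	return true, nil
}
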
	I1114 13:36:20.630244 1192192 system_pods.go:116] waiting for k8s-apps to be running ...
	I1114 13:36:20.642050 1192192 system_pods.go:86] 18 kube-system pods found
	I1114 13:36:20.642087 1192192 system_pods.go:89] "coredns-5dd5756b68-n54k4" [ec5b3ecb-4e5a-43ad-8532-5aed4edd9942] Running
	I1114 13:36:20.642099 1192192 system_pods.go:89] "csi-hostpath-attacher-0" [f55c6f2a-2672-43af-8bfe-3c26a85fac1c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1114 13:36:20.642108 1192192 system_pods.go:89] "csi-hostpath-resizer-0" [b71691cb-b0e4-4676-a800-8e4217b53199] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1114 13:36:20.642118 1192192 system_pods.go:89] "csi-hostpathplugin-fmmzh" [09840aa8-f94d-40a1-a9d5-b7f36836b7f5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1114 13:36:20.642128 1192192 system_pods.go:89] "etcd-addons-008546" [df26f21c-60db-4cb8-acd0-d16ba705c09d] Running
	I1114 13:36:20.642135 1192192 system_pods.go:89] "kindnet-n46x4" [5de9bb04-fd5d-41d7-85d0-91a5ea4cc9d5] Running
	I1114 13:36:20.642145 1192192 system_pods.go:89] "kube-apiserver-addons-008546" [63d2df1e-7269-4134-83b2-a9847618ebb0] Running
	I1114 13:36:20.642151 1192192 system_pods.go:89] "kube-controller-manager-addons-008546" [f9c62bfc-ed16-4713-bb6b-5b1629c2aaca] Running
	I1114 13:36:20.642160 1192192 system_pods.go:89] "kube-ingress-dns-minikube" [ea9a3530-9aab-4914-8a5c-0753a2ee56f8] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1114 13:36:20.642169 1192192 system_pods.go:89] "kube-proxy-lcbj5" [1778ad55-d059-469e-8add-7c8f82eb026e] Running
	I1114 13:36:20.642176 1192192 system_pods.go:89] "kube-scheduler-addons-008546" [cf177c8b-c95c-4b6e-9345-55de4cb0bb88] Running
	I1114 13:36:20.642187 1192192 system_pods.go:89] "metrics-server-7c66d45ddc-rdnlc" [9c93fef0-fca3-46cd-adf4-ed2436c58e74] Running
	I1114 13:36:20.642193 1192192 system_pods.go:89] "nvidia-device-plugin-daemonset-z7lg9" [39138d17-6ce8-4243-924a-592f11b60525] Running
	I1114 13:36:20.642199 1192192 system_pods.go:89] "registry-6zxk7" [aab81737-f3a1-4831-aa4b-580e8350b7bc] Running
	I1114 13:36:20.642205 1192192 system_pods.go:89] "registry-proxy-szh9q" [5df13a97-1d8b-408c-8786-cb99aa641c8d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1114 13:36:20.642216 1192192 system_pods.go:89] "snapshot-controller-58dbcc7b99-d5hrl" [36055ad0-492e-4666-8872-07c5161aa3c2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1114 13:36:20.642227 1192192 system_pods.go:89] "snapshot-controller-58dbcc7b99-zjgxb" [3f05ff66-d51a-4eeb-99e0-cb3b6bc60493] Running
	I1114 13:36:20.642233 1192192 system_pods.go:89] "storage-provisioner" [24955929-a2f6-48aa-9ae4-be5821b1951d] Running
	I1114 13:36:20.642240 1192192 system_pods.go:126] duration metric: took 11.990245ms to wait for k8s-apps to be running ...
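
The 18-pod inventory is listed twice: once for system_pods.go:43 (pods appear at all) and once for system_pods.go:116 (k8s-apps running). The simplified sketch below only partitions kube-system pods by phase; minikube's real check is evidently more selective, since several addon pods above are still Pending when this step passes:

package sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// pendingSystemPods lists kube-system and returns the names of pods that
// have not reached phase Running or Succeeded.
func pendingSystemPods(ctx context.Context, cs kubernetes.Interface) ([]string, error) {
	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	if err != nil {
		return nil, err
	}
	var pending []string
	for _, p := range pods.Items {
		if p.Status.Phase != corev1.PodRunning && p.Status.Phase != corev1.PodSucceeded {
			pending = append(pending, p.Name)
		}
	}
	return pending, nil
}
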
	I1114 13:36:20.642250 1192192 system_svc.go:44] waiting for kubelet service to be running ...
	I1114 13:36:20.642309 1192192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 13:36:20.655763 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:20.658079 1192192 system_svc.go:56] duration metric: took 15.820715ms (WaitForService) to wait for kubelet.
	I1114 13:36:20.658154 1192192 kubeadm.go:581] duration metric: took 52.365745035s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
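
The system_svc.go step above verifies kubelet with systemctl is-active --quiet, whose exit status is 0 only while the unit is active. The same check from Go, mirroring the logged command line verbatim:

package sketch

import "os/exec"

// kubeletActive mirrors `sudo systemctl is-active --quiet service kubelet`
// as run above: a nil error (exit status 0) means the service is running.
func kubeletActive() bool {
	return exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run() == nil
}
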
	I1114 13:36:20.658189 1192192 node_conditions.go:102] verifying NodePressure condition ...
	I1114 13:36:20.661830 1192192 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1114 13:36:20.661865 1192192 node_conditions.go:123] node cpu capacity is 2
	I1114 13:36:20.661879 1192192 node_conditions.go:105] duration metric: took 3.654134ms to run NodePressure ...
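
The two node_conditions.go capacity figures just above (203034800Ki of ephemeral storage, 2 CPUs) come straight off the node object's status. An illustrative sketch, again with invented names:

package sketch

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// logNodeCapacity prints the ephemeral-storage and cpu capacity of a node,
// matching the two node_conditions.go lines above.
func logNodeCapacity(ctx context.Context, cs kubernetes.Interface, name string) error {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
	cpu := node.Status.Capacity[corev1.ResourceCPU]
	fmt.Printf("node storage ephemeral capacity is %s\n", storage.String())
	fmt.Printf("node cpu capacity is %s\n", cpu.String())
	return nil
}
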
	I1114 13:36:20.661912 1192192 start.go:228] waiting for startup goroutines ...
	I1114 13:36:20.739247 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:20.766605 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:21.042673 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:21.155297 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:21.240233 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:21.274498 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:21.536966 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:21.654987 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:21.740463 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:21.766099 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:22.034173 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:22.179208 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:22.257606 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:22.271190 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:22.533371 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:22.658007 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:22.742314 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:22.770471 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:23.033076 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:23.156215 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:23.239775 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:23.269769 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:23.533141 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:23.655180 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:23.740089 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:23.765278 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:24.033072 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:24.155568 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:24.240432 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:24.276953 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:24.534337 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:24.656022 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:24.739955 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:24.765021 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:25.035462 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:25.154944 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:25.240606 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:25.289033 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:25.532740 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:25.655363 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:25.742136 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:25.765319 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:26.032101 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:26.154766 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:26.241569 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:26.265531 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:26.533682 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:26.654857 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:26.742161 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:26.765581 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:27.034093 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:27.156266 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:27.239819 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:27.266877 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:27.532889 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:27.654303 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:27.753036 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:27.767546 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:28.040220 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:28.155012 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:28.239128 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:28.265229 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:28.535663 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:28.655963 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:28.744509 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:28.768089 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:29.033383 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:29.155318 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:29.252572 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:29.265994 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:29.532258 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:29.654939 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:29.738574 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:29.766102 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:30.032704 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:30.155278 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:30.238671 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:30.265438 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:30.533040 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:30.654907 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:30.747525 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:30.767726 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:31.033278 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:31.155617 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:31.239301 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:31.268951 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:31.532881 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:31.654480 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:31.747201 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:31.775385 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:32.034774 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:32.155090 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:32.239085 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:32.264708 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:32.533940 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:32.654579 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:32.739828 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:32.766062 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:33.033589 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:33.155533 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:33.239425 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:33.266063 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:33.531916 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:33.657909 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:33.739991 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:33.765461 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:34.039596 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:34.154687 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:34.244477 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:34.266520 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:34.532205 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:34.654438 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:34.738991 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:34.766000 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:35.032961 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:35.154803 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:35.239436 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:35.264799 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:35.531646 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:35.655427 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:35.741774 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:35.765848 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:36.033061 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:36.156181 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:36.240187 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:36.268163 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:36.532645 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:36.655015 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:36.740068 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:36.765464 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:37.033169 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:37.167695 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:37.239302 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:37.267050 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:37.533140 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:37.654817 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:37.739532 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:37.765599 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:38.033609 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:38.155045 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:38.240356 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:38.265939 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:38.534568 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:38.655074 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:38.739148 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:38.765821 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:39.032332 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:39.155149 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:39.239857 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:39.266097 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:39.532734 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:39.654357 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:39.740451 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:39.766966 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1114 13:36:40.036763 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:40.154849 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:40.239723 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:40.272534 1192192 kapi.go:107] duration metric: took 1m6.05899911s to wait for kubernetes.io/minikube-addons=registry ...
	I1114 13:36:40.538092 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:40.656314 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:40.739385 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:41.032612 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:41.154922 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:41.238933 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:41.535982 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:41.655082 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:41.740435 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:42.042516 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:42.155372 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:42.239459 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:42.533438 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:42.655022 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:42.739391 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:43.037632 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:43.157512 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:43.239121 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:43.534504 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:43.655152 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:43.747495 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:44.034688 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:44.155684 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:44.241569 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:44.532357 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:44.655469 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:44.739523 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:45.034525 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:45.155178 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:45.240214 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:45.534929 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:45.658600 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:45.739464 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:46.034983 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:46.155511 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:46.239884 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:46.533521 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:46.654976 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:46.738992 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:47.032839 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:47.154322 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:47.239401 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:47.535326 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:47.655028 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:47.740169 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:48.034820 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:48.155646 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:48.239899 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:48.533649 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1114 13:36:48.654573 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:48.739616 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:49.032331 1192192 kapi.go:107] duration metric: took 1m14.537841992s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1114 13:36:49.155373 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:49.238866 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:49.654916 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:49.739758 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:50.154999 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:50.239406 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:50.654787 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:50.739183 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:51.154998 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:51.239696 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:51.654989 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:51.739915 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:52.155858 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:52.239423 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:52.654339 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:52.740221 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:53.155035 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:53.239616 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:53.654318 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:53.739028 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:54.156078 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:54.239592 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:54.654697 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:54.739898 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:55.154691 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:55.239858 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:55.655016 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:55.738977 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:56.154566 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:56.240441 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:56.654465 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:56.739046 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:57.156115 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:57.239642 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:57.654954 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:57.739913 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:58.155476 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:58.239085 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:58.654097 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:58.739213 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:59.155383 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:59.239041 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:36:59.654642 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:36:59.739179 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:37:00.154714 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:37:00.240092 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:37:00.655557 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:37:00.739251 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:37:01.154522 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:37:01.239065 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:37:01.654003 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:37:01.739521 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:37:02.154726 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:37:02.239957 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:37:02.655422 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:37:02.741379 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:37:03.154372 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:37:03.238805 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:37:03.655393 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:37:03.739418 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:37:04.155418 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:37:04.238820 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:37:04.654605 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:37:04.740319 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:37:05.154731 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:37:05.239803 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:37:05.654717 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:37:05.738741 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:37:06.155055 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:37:06.239639 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:37:06.654603 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:37:06.739465 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:37:07.156021 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:37:07.251417 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:37:07.654420 1192192 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1114 13:37:07.739900 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:37:08.155812 1192192 kapi.go:107] duration metric: took 1m30.028765018s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1114 13:37:08.157977 1192192 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-008546 cluster.
	I1114 13:37:08.160022 1192192 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1114 13:37:08.162030 1192192 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
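	
	For illustration, opting a pod out of the credential mount is a single label on the pod metadata. A minimal manifest might look like the sketch below; the `gcp-auth-skip-secret` label key is taken from the message above, while the pod name, image, and the "true" value are assumptions for the example:
	
	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: no-gcp-creds               # hypothetical name
	      labels:
	        gcp-auth-skip-secret: "true"   # key from the hint above; value assumed
	    spec:
	      containers:
	      - name: app
	        image: nginx                   # placeholder image
	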
	I1114 13:37:08.239734 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:37:08.740106 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:37:09.239565 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:37:09.739348 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:37:10.241491 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:37:10.744163 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:37:11.239297 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:37:11.741367 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:37:12.242038 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:37:12.740100 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:37:13.240043 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:37:13.741429 1192192 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:37:14.240491 1192192 kapi.go:107] duration metric: took 1m40.032262024s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1114 13:37:14.243145 1192192 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, ingress-dns, storage-provisioner-rancher, inspektor-gadget, storage-provisioner, metrics-server, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, gcp-auth, ingress
	I1114 13:37:14.245241 1192192 addons.go:502] enable addons completed in 1m46.382371015s: enabled=[nvidia-device-plugin cloud-spanner ingress-dns storage-provisioner-rancher inspektor-gadget storage-provisioner metrics-server default-storageclass volumesnapshots registry csi-hostpath-driver gcp-auth ingress]
	I1114 13:37:14.245346 1192192 start.go:233] waiting for cluster config update ...
	I1114 13:37:14.245383 1192192 start.go:242] writing updated cluster config ...
	I1114 13:37:14.245721 1192192 ssh_runner.go:195] Run: rm -f paused
	I1114 13:37:14.348762 1192192 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1114 13:37:14.350850 1192192 out.go:177] * Done! kubectl is now configured to use "addons-008546" cluster and "default" namespace by default
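	
	The long runs of kapi.go:96 lines above are a simple poll-and-log wait on a pod label selector, ending in the kapi.go:107 duration metric once every matching pod is Running. As a rough illustration only, a client-go sketch of that pattern could look like the following; the package, helper name, and 500ms interval are assumptions, not minikube's actual kapi.go code:
	
	    // Package kapi is illustrative only: a minimal poll-and-log wait.
	    package kapi
	    
	    import (
	        "context"
	        "fmt"
	        "time"
	    
	        v1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	    )
	    
	    // WaitForPodsByLabel polls until every pod matching selector is Running,
	    // logging the current phase on each tick, much like the lines above.
	    func WaitForPodsByLabel(ctx context.Context, c kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	        start := time.Now()
	        for time.Since(start) < timeout {
	            pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
	            if err != nil {
	                return err
	            }
	            ready := len(pods.Items) > 0
	            for _, p := range pods.Items {
	                if p.Status.Phase != v1.PodRunning {
	                    fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
	                    ready = false
	                }
	            }
	            if ready {
	                fmt.Printf("duration metric: took %s to wait for %s\n", time.Since(start), selector)
	                return nil
	            }
	            time.Sleep(500 * time.Millisecond)
	        }
	        return fmt.Errorf("timed out waiting for %s", selector)
	    }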
	
	* 
	* ==> CRI-O <==
	* Nov 14 13:41:04 addons-008546 crio[890]: time="2023-11-14 13:41:04.868268786Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=c88a8ec6-1de6-4709-99ce-acbc76883bfd name=/runtime.v1.ImageService/ImageStatus
	Nov 14 13:41:04 addons-008546 crio[890]: time="2023-11-14 13:41:04.868473765Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7],Size_:28999827,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=c88a8ec6-1de6-4709-99ce-acbc76883bfd name=/runtime.v1.ImageService/ImageStatus
	Nov 14 13:41:04 addons-008546 crio[890]: time="2023-11-14 13:41:04.869519043Z" level=info msg="Creating container: default/hello-world-app-5d77478584-dfpws/hello-world-app" id=5363664d-f2c5-4f69-9daa-00055336090a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 14 13:41:04 addons-008546 crio[890]: time="2023-11-14 13:41:04.869610234Z" level=warning msg="Allowed annotations are specified for workload []"
	Nov 14 13:41:04 addons-008546 crio[890]: time="2023-11-14 13:41:04.905547750Z" level=info msg="Stopping container: 50fd7cfa56fe9cb6b59e2ede48c06e9140d8567a8220716cf67bd482132f215c (timeout: 2s)" id=ee627ec5-aea9-42e7-b971-3e24c143ddca name=/runtime.v1.RuntimeService/StopContainer
	Nov 14 13:41:04 addons-008546 crio[890]: time="2023-11-14 13:41:04.955978923Z" level=info msg="Created container 9e734f731597a6e3145f9a8007bedeb6bbbe7048bdfff652eb1791604ccae9f9: default/hello-world-app-5d77478584-dfpws/hello-world-app" id=5363664d-f2c5-4f69-9daa-00055336090a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 14 13:41:04 addons-008546 crio[890]: time="2023-11-14 13:41:04.956661085Z" level=info msg="Starting container: 9e734f731597a6e3145f9a8007bedeb6bbbe7048bdfff652eb1791604ccae9f9" id=8ed13fd2-067d-47b7-b9ed-2c06803a8e15 name=/runtime.v1.RuntimeService/StartContainer
	Nov 14 13:41:04 addons-008546 conmon[9034]: conmon 9e734f731597a6e3145f <ninfo>: container 9051 exited with status 1
	Nov 14 13:41:04 addons-008546 crio[890]: time="2023-11-14 13:41:04.973790672Z" level=info msg="Started container" PID=9051 containerID=9e734f731597a6e3145f9a8007bedeb6bbbe7048bdfff652eb1791604ccae9f9 description=default/hello-world-app-5d77478584-dfpws/hello-world-app id=8ed13fd2-067d-47b7-b9ed-2c06803a8e15 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4f40639eacc5f570ec175437f03cba98bd0792ffe5d29fbc863c62ffab366292
	Nov 14 13:41:05 addons-008546 crio[890]: time="2023-11-14 13:41:05.165292039Z" level=info msg="Removing container: fa3368a11bf062b197e80b4d94d9a5f18241c9be5fcdbecb41d1adeee63f6c92" id=efee531f-ae5a-4de2-a6e6-957570b9985c name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 14 13:41:05 addons-008546 crio[890]: time="2023-11-14 13:41:05.190666607Z" level=info msg="Removed container fa3368a11bf062b197e80b4d94d9a5f18241c9be5fcdbecb41d1adeee63f6c92: default/hello-world-app-5d77478584-dfpws/hello-world-app" id=efee531f-ae5a-4de2-a6e6-957570b9985c name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 14 13:41:06 addons-008546 crio[890]: time="2023-11-14 13:41:06.927780975Z" level=warning msg="Stopping container 50fd7cfa56fe9cb6b59e2ede48c06e9140d8567a8220716cf67bd482132f215c with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=ee627ec5-aea9-42e7-b971-3e24c143ddca name=/runtime.v1.RuntimeService/StopContainer
	Nov 14 13:41:06 addons-008546 conmon[5584]: conmon 50fd7cfa56fe9cb6b59e <ninfo>: container 5597 exited with status 137
	Nov 14 13:41:07 addons-008546 crio[890]: time="2023-11-14 13:41:07.101218162Z" level=info msg="Stopped container 50fd7cfa56fe9cb6b59e2ede48c06e9140d8567a8220716cf67bd482132f215c: ingress-nginx/ingress-nginx-controller-7c6974c4d8-jfphj/controller" id=ee627ec5-aea9-42e7-b971-3e24c143ddca name=/runtime.v1.RuntimeService/StopContainer
	Nov 14 13:41:07 addons-008546 crio[890]: time="2023-11-14 13:41:07.101749392Z" level=info msg="Stopping pod sandbox: 6d9512afd4b1c6edffffc6b8e617781ea88c5c98e564868ddd186145f4aabc5b" id=f06ba4df-c6a8-4806-b78b-ac9b00e4825f name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 14 13:41:07 addons-008546 crio[890]: time="2023-11-14 13:41:07.105417800Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-N3FX6AJHZH73UV2Z - [0:0]\n:KUBE-HP-HCJ4ZYTAIPTEY234 - [0:0]\n-X KUBE-HP-N3FX6AJHZH73UV2Z\n-X KUBE-HP-HCJ4ZYTAIPTEY234\nCOMMIT\n"
	Nov 14 13:41:07 addons-008546 crio[890]: time="2023-11-14 13:41:07.107005196Z" level=info msg="Closing host port tcp:80"
	Nov 14 13:41:07 addons-008546 crio[890]: time="2023-11-14 13:41:07.107054943Z" level=info msg="Closing host port tcp:443"
	Nov 14 13:41:07 addons-008546 crio[890]: time="2023-11-14 13:41:07.108759048Z" level=info msg="Host port tcp:80 does not have an open socket"
	Nov 14 13:41:07 addons-008546 crio[890]: time="2023-11-14 13:41:07.108790999Z" level=info msg="Host port tcp:443 does not have an open socket"
	Nov 14 13:41:07 addons-008546 crio[890]: time="2023-11-14 13:41:07.108954009Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7c6974c4d8-jfphj Namespace:ingress-nginx ID:6d9512afd4b1c6edffffc6b8e617781ea88c5c98e564868ddd186145f4aabc5b UID:912af1c6-6c73-4d5f-8051-fe7e772ed908 NetNS:/var/run/netns/1759e1c9-4732-4407-b1d6-d4586413a95b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Nov 14 13:41:07 addons-008546 crio[890]: time="2023-11-14 13:41:07.109102513Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7c6974c4d8-jfphj from CNI network \"kindnet\" (type=ptp)"
	Nov 14 13:41:07 addons-008546 crio[890]: time="2023-11-14 13:41:07.134953607Z" level=info msg="Stopped pod sandbox: 6d9512afd4b1c6edffffc6b8e617781ea88c5c98e564868ddd186145f4aabc5b" id=f06ba4df-c6a8-4806-b78b-ac9b00e4825f name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 14 13:41:07 addons-008546 crio[890]: time="2023-11-14 13:41:07.172154992Z" level=info msg="Removing container: 50fd7cfa56fe9cb6b59e2ede48c06e9140d8567a8220716cf67bd482132f215c" id=c8cbf033-d384-484d-a610-28d1d9806bd7 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 14 13:41:07 addons-008546 crio[890]: time="2023-11-14 13:41:07.191476566Z" level=info msg="Removed container 50fd7cfa56fe9cb6b59e2ede48c06e9140d8567a8220716cf67bd482132f215c: ingress-nginx/ingress-nginx-controller-7c6974c4d8-jfphj/controller" id=c8cbf033-d384-484d-a610-28d1d9806bd7 name=/runtime.v1.RuntimeService/RemoveContainer
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9e734f731597a       dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79                                                             7 seconds ago       Exited              hello-world-app           2                   4f40639eacc5f       hello-world-app-5d77478584-dfpws
	dd55acce40dc6       docker.io/library/nginx@sha256:b7537eea6ffa4f00aac311f16654b50736328eb370208c68b6649a97b7a2724b                              2 minutes ago       Running             nginx                     0                   a24ba5a837c6f       nginx
	cce398e0268c6       ghcr.io/headlamp-k8s/headlamp@sha256:7a9587036bd29304f8f1387a7245556a3c479434670b2ca58e3624d44d2a68c9                        3 minutes ago       Running             headlamp                  0                   c9eadd74c002c       headlamp-777fd4b855-zqndv
	30b9f29b8e6be       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:63b520448091bc94aa4dba00d6b3b3c25e410c4fb73aa46feae5b25f9895abaa                 4 minutes ago       Running             gcp-auth                  0                   ccad0592b5d58       gcp-auth-d4c87556c-jh8wk
	0db7f45920376       af594c6a879f2e441ea446a122296abbbe11aae5547e780f2582fbcda5df271c                                                             4 minutes ago       Exited              patch                     3                   94b3f9d8ea2bb       ingress-nginx-admission-patch-v9bjs
	f2f3cf2ef9672       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:67202a0258c6f81d073f265f449a732c89cc1112a8e80ea27317294df6dce2b5   4 minutes ago       Exited              create                    0                   d3d824fcf806f       ingress-nginx-admission-create-rzvwk
	49b92ff93c0a9       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             5 minutes ago       Running             storage-provisioner       0                   8240c4bef7f33       storage-provisioner
	662ab8cbdbf3a       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                                             5 minutes ago       Running             coredns                   0                   40a68ed27ed68       coredns-5dd5756b68-n54k4
	66efff6f8d0bd       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26                                                             5 minutes ago       Running             kindnet-cni               0                   4aeff44683a3a       kindnet-n46x4
	677985142b341       a5dd5cdd6d3ef8058b7fbcecacbcee7f522fa8b9f3bb53bac6570e62ba2cbdbd                                                             5 minutes ago       Running             kube-proxy                0                   237d64a1d562e       kube-proxy-lcbj5
	6ebe905ae5912       537e9a59ee2fdef3cc7f5ebd14f9c4c77047176fca2bc5599db196217efeb5d7                                                             6 minutes ago       Running             kube-apiserver            0                   48bbfa0e9af89       kube-apiserver-addons-008546
	7a0cbe31f9395       8276439b4f237dda1f7820b0fcef600bb5662e441aa00e7b7c45843e60f04a16                                                             6 minutes ago       Running             kube-controller-manager   0                   a80ad538d0988       kube-controller-manager-addons-008546
	2a192f1945960       42a4e73724daac2ee0c96eeeb79b9cf5f242fc3927ccfdc4df63b58140097314                                                             6 minutes ago       Running             kube-scheduler            0                   54daa944c8462       kube-scheduler-addons-008546
	3454e89de5651       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                                             6 minutes ago       Running             etcd                      0                   72106d2d1db59       etcd-addons-008546
	
	* 
	* ==> coredns [662ab8cbdbf3ad35de05c765faaeaf32c113d0c1f3293e6138ecdebc5ac3a9da] <==
	* [INFO] 10.244.0.19:58137 - 35897 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00008923s
	[INFO] 10.244.0.19:58137 - 47713 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001918824s
	[INFO] 10.244.0.19:47424 - 12993 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.002318304s
	[INFO] 10.244.0.19:58137 - 22382 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001687884s
	[INFO] 10.244.0.19:47424 - 43324 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001557579s
	[INFO] 10.244.0.19:47424 - 33376 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000125883s
	[INFO] 10.244.0.19:58137 - 10811 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000040049s
	[INFO] 10.244.0.19:51866 - 17649 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000097632s
	[INFO] 10.244.0.19:38688 - 53814 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000056041s
	[INFO] 10.244.0.19:51866 - 16449 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000057994s
	[INFO] 10.244.0.19:38688 - 11071 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000057739s
	[INFO] 10.244.0.19:51866 - 58681 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000085874s
	[INFO] 10.244.0.19:38688 - 22504 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000051659s
	[INFO] 10.244.0.19:51866 - 16527 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00005536s
	[INFO] 10.244.0.19:38688 - 40604 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000041066s
	[INFO] 10.244.0.19:38688 - 61603 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000059946s
	[INFO] 10.244.0.19:51866 - 48231 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00003927s
	[INFO] 10.244.0.19:51866 - 57947 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000049912s
	[INFO] 10.244.0.19:38688 - 31166 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000066133s
	[INFO] 10.244.0.19:38688 - 2452 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000738613s
	[INFO] 10.244.0.19:51866 - 15342 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001202889s
	[INFO] 10.244.0.19:38688 - 1881 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.000847921s
	[INFO] 10.244.0.19:38688 - 34227 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000053759s
	[INFO] 10.244.0.19:51866 - 18138 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001161438s
	[INFO] 10.244.0.19:51866 - 28209 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000060266s
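	
	The query pattern above is ordinary resolv.conf search-path expansion: the looked-up name has fewer dots than the typical Kubernetes ndots:5, so each search suffix is tried (and answered NXDOMAIN) before the already-fully-qualified form returns NOERROR with the service address. A resolv.conf consistent with these queries would look roughly like the sketch below; the search suffixes are read directly off the queries, while the nameserver address is an assumption (kube-dns's usual ClusterIP) and is not shown in this log:
	
	    # search suffixes, in the order visible in the queries above
	    search ingress-nginx.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
	    # kube-dns ClusterIP; assumed, not taken from this log
	    nameserver 10.96.0.10
	    options ndots:5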
	
	* 
	* ==> describe nodes <==
	* Name:               addons-008546
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-008546
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d8573efb5a7770e21024de23a29d810b200278b
	                    minikube.k8s.io/name=addons-008546
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_14T13_35_15_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-008546
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 14 Nov 2023 13:35:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-008546
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 14 Nov 2023 13:41:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 14 Nov 2023 13:38:49 +0000   Tue, 14 Nov 2023 13:35:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 14 Nov 2023 13:38:49 +0000   Tue, 14 Nov 2023 13:35:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 14 Nov 2023 13:38:49 +0000   Tue, 14 Nov 2023 13:35:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 14 Nov 2023 13:38:49 +0000   Tue, 14 Nov 2023 13:36:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-008546
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 a968fce8df3c43c6be7def43aa44fd39
	  System UUID:                71f08bbd-3656-440c-9ff5-585059ed86e3
	  Boot ID:                    3bdb9c53-2d63-44b9-be60-6ff1ad471e35
	  Kernel Version:             5.15.0-1049-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-dfpws         0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m46s
	  gcp-auth                    gcp-auth-d4c87556c-jh8wk                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m34s
	  headlamp                    headlamp-777fd4b855-zqndv                0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m36s
	  kube-system                 coredns-5dd5756b68-n54k4                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m44s
	  kube-system                 etcd-addons-008546                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m59s
	  kube-system                 kindnet-n46x4                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5m45s
	  kube-system                 kube-apiserver-addons-008546             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m59s
	  kube-system                 kube-controller-manager-addons-008546    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m57s
	  kube-system                 kube-proxy-lcbj5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m45s
	  kube-system                 kube-scheduler-addons-008546             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m57s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 5m38s  kube-proxy       
	  Normal  Starting                 5m58s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m58s  kubelet          Node addons-008546 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m58s  kubelet          Node addons-008546 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m58s  kubelet          Node addons-008546 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m45s  node-controller  Node addons-008546 event: Registered Node addons-008546 in Controller
	  Normal  NodeReady                5m12s  kubelet          Node addons-008546 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.001137] FS-Cache: O-key=[8] 'bf623b0000000000'
	[  +0.000766] FS-Cache: N-cookie c=00000054 [p=0000004b fl=2 nc=0 na=1]
	[  +0.001067] FS-Cache: N-cookie d=00000000fbc4fe34{9p.inode} n=00000000710f4086
	[  +0.001133] FS-Cache: N-key=[8] 'bf623b0000000000'
	[  +0.002791] FS-Cache: Duplicate cookie detected
	[  +0.000773] FS-Cache: O-cookie c=0000004e [p=0000004b fl=226 nc=0 na=1]
	[  +0.001045] FS-Cache: O-cookie d=00000000fbc4fe34{9p.inode} n=00000000bb6723c3
	[  +0.001133] FS-Cache: O-key=[8] 'bf623b0000000000'
	[  +0.000784] FS-Cache: N-cookie c=00000055 [p=0000004b fl=2 nc=0 na=1]
	[  +0.001020] FS-Cache: N-cookie d=00000000fbc4fe34{9p.inode} n=00000000b36662d3
	[  +0.001135] FS-Cache: N-key=[8] 'bf623b0000000000'
	[  +2.396006] FS-Cache: Duplicate cookie detected
	[  +0.000739] FS-Cache: O-cookie c=0000004c [p=0000004b fl=226 nc=0 na=1]
	[  +0.001029] FS-Cache: O-cookie d=00000000fbc4fe34{9p.inode} n=00000000a441efb8
	[  +0.001157] FS-Cache: O-key=[8] 'be623b0000000000'
	[  +0.000755] FS-Cache: N-cookie c=00000057 [p=0000004b fl=2 nc=0 na=1]
	[  +0.000998] FS-Cache: N-cookie d=00000000fbc4fe34{9p.inode} n=000000005b6bfa59
	[  +0.001103] FS-Cache: N-key=[8] 'be623b0000000000'
	[  +0.426089] FS-Cache: Duplicate cookie detected
	[  +0.000755] FS-Cache: O-cookie c=00000051 [p=0000004b fl=226 nc=0 na=1]
	[  +0.001038] FS-Cache: O-cookie d=00000000fbc4fe34{9p.inode} n=000000002244812d
	[  +0.001198] FS-Cache: O-key=[8] 'c4623b0000000000'
	[  +0.000752] FS-Cache: N-cookie c=00000058 [p=0000004b fl=2 nc=0 na=1]
	[  +0.001102] FS-Cache: N-cookie d=00000000fbc4fe34{9p.inode} n=0000000038574f41
	[  +0.001111] FS-Cache: N-key=[8] 'c4623b0000000000'
	
	* 
	* ==> etcd [3454e89de5651fef5b9886dad4461afa675bb3cf3c7798c1ee0abebf9f29cdfc] <==
	* {"level":"info","ts":"2023-11-14T13:35:07.992762Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-14T13:35:07.9928Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-11-14T13:35:07.992831Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-14T13:35:07.993735Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-11-14T13:35:08.005434Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2023-11-14T13:35:29.070517Z","caller":"traceutil/trace.go:171","msg":"trace[1715108252] transaction","detail":"{read_only:false; response_revision:354; number_of_response:1; }","duration":"254.211452ms","start":"2023-11-14T13:35:28.816272Z","end":"2023-11-14T13:35:29.070483Z","steps":["trace[1715108252] 'process raft request'  (duration: 254.037726ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-14T13:35:29.104031Z","caller":"traceutil/trace.go:171","msg":"trace[239322227] transaction","detail":"{read_only:false; response_revision:355; number_of_response:1; }","duration":"153.793456ms","start":"2023-11-14T13:35:28.950217Z","end":"2023-11-14T13:35:29.10401Z","steps":["trace[239322227] 'process raft request'  (duration: 153.7064ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-14T13:35:29.297061Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"154.711112ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets/kube-system/coredns-5dd5756b68\" ","response":"range_response_count:1 size:3635"}
	{"level":"info","ts":"2023-11-14T13:35:29.297138Z","caller":"traceutil/trace.go:171","msg":"trace[1959830483] range","detail":"{range_begin:/registry/replicasets/kube-system/coredns-5dd5756b68; range_end:; response_count:1; response_revision:355; }","duration":"154.853823ms","start":"2023-11-14T13:35:29.142268Z","end":"2023-11-14T13:35:29.297122Z","steps":["trace[1959830483] 'agreement among raft nodes before linearized reading'  (duration: 38.226984ms)","trace[1959830483] 'range keys from in-memory index tree'  (duration: 116.49733ms)"],"step_count":2}
	{"level":"info","ts":"2023-11-14T13:35:32.705182Z","caller":"traceutil/trace.go:171","msg":"trace[1248839261] transaction","detail":"{read_only:false; response_revision:418; number_of_response:1; }","duration":"119.68173ms","start":"2023-11-14T13:35:32.585482Z","end":"2023-11-14T13:35:32.705163Z","steps":["trace[1248839261] 'process raft request'  (duration: 87.907184ms)","trace[1248839261] 'compare'  (duration: 25.990823ms)"],"step_count":2}
	{"level":"info","ts":"2023-11-14T13:35:32.705326Z","caller":"traceutil/trace.go:171","msg":"trace[103528636] linearizableReadLoop","detail":"{readStateIndex:433; appliedIndex:431; }","duration":"111.716824ms","start":"2023-11-14T13:35:32.593603Z","end":"2023-11-14T13:35:32.705319Z","steps":["trace[103528636] 'read index received'  (duration: 18.891725ms)","trace[103528636] 'applied index is now lower than readState.Index'  (duration: 92.824426ms)"],"step_count":2}
	{"level":"warn","ts":"2023-11-14T13:35:32.705647Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.055571ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/local-path-storage/local-path-provisioner-service-account\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-11-14T13:35:32.705682Z","caller":"traceutil/trace.go:171","msg":"trace[1727365840] range","detail":"{range_begin:/registry/serviceaccounts/local-path-storage/local-path-provisioner-service-account; range_end:; response_count:0; response_revision:419; }","duration":"112.099ms","start":"2023-11-14T13:35:32.593574Z","end":"2023-11-14T13:35:32.705673Z","steps":["trace[1727365840] 'agreement among raft nodes before linearized reading'  (duration: 112.022627ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-14T13:35:32.801327Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.01256ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-ingress-dns-minikube\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-11-14T13:35:32.8014Z","caller":"traceutil/trace.go:171","msg":"trace[645124976] range","detail":"{range_begin:/registry/pods/kube-system/kube-ingress-dns-minikube; range_end:; response_count:0; response_revision:426; }","duration":"101.20053ms","start":"2023-11-14T13:35:32.700177Z","end":"2023-11-14T13:35:32.801377Z","steps":["trace[645124976] 'agreement among raft nodes before linearized reading'  (duration: 100.985893ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-14T13:35:32.801559Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.935985ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/metrics-server\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-11-14T13:35:32.801587Z","caller":"traceutil/trace.go:171","msg":"trace[113712060] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/metrics-server; range_end:; response_count:0; response_revision:426; }","duration":"101.977724ms","start":"2023-11-14T13:35:32.699603Z","end":"2023-11-14T13:35:32.80158Z","steps":["trace[113712060] 'agreement among raft nodes before linearized reading'  (duration: 101.919165ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-14T13:35:32.801734Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.239877ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/replication-controller\" ","response":"range_response_count:1 size:209"}
	{"level":"info","ts":"2023-11-14T13:35:32.801763Z","caller":"traceutil/trace.go:171","msg":"trace[1038407312] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/replication-controller; range_end:; response_count:1; response_revision:426; }","duration":"102.269136ms","start":"2023-11-14T13:35:32.699486Z","end":"2023-11-14T13:35:32.801755Z","steps":["trace[1038407312] 'agreement among raft nodes before linearized reading'  (duration: 102.199959ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-14T13:35:32.802004Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.537099ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/roles/kube-system/system:persistent-volume-provisioner\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-11-14T13:35:32.802035Z","caller":"traceutil/trace.go:171","msg":"trace[240428172] range","detail":"{range_begin:/registry/roles/kube-system/system:persistent-volume-provisioner; range_end:; response_count:0; response_revision:426; }","duration":"108.59492ms","start":"2023-11-14T13:35:32.693433Z","end":"2023-11-14T13:35:32.802028Z","steps":["trace[240428172] 'agreement among raft nodes before linearized reading'  (duration: 108.546502ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-14T13:35:32.827871Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"122.374025ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/gadget\" ","response":"range_response_count:1 size:573"}
	{"level":"info","ts":"2023-11-14T13:35:32.841044Z","caller":"traceutil/trace.go:171","msg":"trace[196740629] range","detail":"{range_begin:/registry/namespaces/gadget; range_end:; response_count:1; response_revision:427; }","duration":"135.551875ms","start":"2023-11-14T13:35:32.705472Z","end":"2023-11-14T13:35:32.841024Z","steps":["trace[196740629] 'agreement among raft nodes before linearized reading'  (duration: 121.19145ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-14T13:35:32.841294Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"135.472507ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2023-11-14T13:35:32.841326Z","caller":"traceutil/trace.go:171","msg":"trace[949784910] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:427; }","duration":"135.515108ms","start":"2023-11-14T13:35:32.705803Z","end":"2023-11-14T13:35:32.841318Z","steps":["trace[949784910] 'agreement among raft nodes before linearized reading'  (duration: 120.846811ms)","trace[949784910] 'range keys from in-memory index tree'  (duration: 14.596018ms)"],"step_count":2}
	
	* 
	* ==> gcp-auth [30b9f29b8e6be0cec57e17398830486dcdab5209c4d74bbec8b1bf3bacfec888] <==
	* 2023/11/14 13:37:07 GCP Auth Webhook started!
	2023/11/14 13:37:21 Ready to marshal response ...
	2023/11/14 13:37:21 Ready to write response ...
	2023/11/14 13:37:21 Ready to marshal response ...
	2023/11/14 13:37:21 Ready to write response ...
	2023/11/14 13:37:24 Ready to marshal response ...
	2023/11/14 13:37:24 Ready to write response ...
	2023/11/14 13:37:30 Ready to marshal response ...
	2023/11/14 13:37:30 Ready to write response ...
	2023/11/14 13:37:36 Ready to marshal response ...
	2023/11/14 13:37:36 Ready to write response ...
	2023/11/14 13:37:36 Ready to marshal response ...
	2023/11/14 13:37:36 Ready to write response ...
	2023/11/14 13:37:36 Ready to marshal response ...
	2023/11/14 13:37:36 Ready to write response ...
	2023/11/14 13:37:49 Ready to marshal response ...
	2023/11/14 13:37:49 Ready to write response ...
	2023/11/14 13:38:04 Ready to marshal response ...
	2023/11/14 13:38:04 Ready to write response ...
	2023/11/14 13:38:26 Ready to marshal response ...
	2023/11/14 13:38:26 Ready to write response ...
	2023/11/14 13:40:46 Ready to marshal response ...
	2023/11/14 13:40:46 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  13:41:12 up 10:23,  0 users,  load average: 0.59, 1.12, 1.44
	Linux addons-008546 5.15.0-1049-aws #54~20.04.1-Ubuntu SMP Fri Oct 6 22:07:16 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [66efff6f8d0bd5cfd153115d9fafe975ab897a01f1e5cd24bb460f0250771e61] <==
	* I1114 13:39:10.418463       1 main.go:227] handling current node
	I1114 13:39:20.425617       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1114 13:39:20.425651       1 main.go:227] handling current node
	I1114 13:39:30.430273       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1114 13:39:30.430305       1 main.go:227] handling current node
	I1114 13:39:40.442371       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1114 13:39:40.442402       1 main.go:227] handling current node
	I1114 13:39:50.455110       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1114 13:39:50.455141       1 main.go:227] handling current node
	I1114 13:40:00.459261       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1114 13:40:00.459292       1 main.go:227] handling current node
	I1114 13:40:10.463703       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1114 13:40:10.463732       1 main.go:227] handling current node
	I1114 13:40:20.476363       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1114 13:40:20.476393       1 main.go:227] handling current node
	I1114 13:40:30.480938       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1114 13:40:30.480965       1 main.go:227] handling current node
	I1114 13:40:40.491048       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1114 13:40:40.491090       1 main.go:227] handling current node
	I1114 13:40:50.503883       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1114 13:40:50.503913       1 main.go:227] handling current node
	I1114 13:41:00.514998       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1114 13:41:00.515026       1 main.go:227] handling current node
	I1114 13:41:10.519016       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1114 13:41:10.519048       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [6ebe905ae59120bae7de5df4404a3131da47de00354ebd103b62075779dce7c8] <==
	* I1114 13:38:19.509823       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1114 13:38:20.531945       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1114 13:38:22.662837       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1114 13:38:22.662984       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1114 13:38:22.675417       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1114 13:38:22.675476       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1114 13:38:22.694229       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1114 13:38:22.694379       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1114 13:38:22.702159       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1114 13:38:22.702509       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1114 13:38:22.712865       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1114 13:38:22.713878       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1114 13:38:22.716700       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1114 13:38:22.716846       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1114 13:38:22.795235       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1114 13:38:22.795292       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1114 13:38:22.815548       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1114 13:38:22.816284       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1114 13:38:23.702543       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1114 13:38:23.816006       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1114 13:38:23.834366       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1114 13:38:25.669598       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I1114 13:38:26.124457       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.103.218.248"}
	I1114 13:39:15.434938       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1114 13:40:46.349165       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.101.166.54"}
	
	* 
	* ==> kube-controller-manager [7a0cbe31f93957ca3d352d17a6ba576be3843b49d30f569db140d60e3e95c665] <==
	* E1114 13:40:12.802960       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1114 13:40:22.069397       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1114 13:40:22.069432       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1114 13:40:31.148028       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1114 13:40:31.148060       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1114 13:40:43.337240       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1114 13:40:43.337274       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1114 13:40:46.073631       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I1114 13:40:46.121185       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-dfpws"
	I1114 13:40:46.132166       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="60.384206ms"
	I1114 13:40:46.168917       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="36.640684ms"
	I1114 13:40:46.169018       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="56.336µs"
	W1114 13:40:48.044102       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1114 13:40:48.044141       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1114 13:40:49.142961       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="55.868µs"
	I1114 13:40:50.145795       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="42.215µs"
	I1114 13:40:51.142735       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="71.679µs"
	I1114 13:41:03.865441       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I1114 13:41:03.870411       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7c6974c4d8" duration="7.926µs"
	I1114 13:41:03.872317       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I1114 13:41:05.180365       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="83.815µs"
	W1114 13:41:07.848583       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1114 13:41:07.848619       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1114 13:41:11.572675       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1114 13:41:11.572786       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	* 
	* ==> kube-proxy [677985142b341b5a926aaff8c7a023e4bf25feb396feb66ad8e3c7e9023c0fc9] <==
	* I1114 13:35:33.532169       1 server_others.go:69] "Using iptables proxy"
	I1114 13:35:33.643790       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I1114 13:35:33.832442       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1114 13:35:33.853108       1 server_others.go:152] "Using iptables Proxier"
	I1114 13:35:33.853218       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1114 13:35:33.853249       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1114 13:35:33.853350       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1114 13:35:33.853619       1 server.go:846] "Version info" version="v1.28.3"
	I1114 13:35:33.853881       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1114 13:35:33.854698       1 config.go:188] "Starting service config controller"
	I1114 13:35:33.855180       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1114 13:35:33.855253       1 config.go:97] "Starting endpoint slice config controller"
	I1114 13:35:33.855283       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1114 13:35:33.855816       1 config.go:315] "Starting node config controller"
	I1114 13:35:33.855867       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1114 13:35:33.955965       1 shared_informer.go:318] Caches are synced for service config
	I1114 13:35:33.956064       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1114 13:35:33.962882       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [2a192f1945960f0a3b4988c843f899fcf16d4396b70f87b18fe8a9cac10424f5] <==
	* W1114 13:35:11.923714       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1114 13:35:11.923731       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1114 13:35:11.923769       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1114 13:35:11.923783       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1114 13:35:11.923898       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1114 13:35:11.923913       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1114 13:35:11.923950       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1114 13:35:11.923963       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1114 13:35:11.923996       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1114 13:35:11.924009       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1114 13:35:11.924146       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1114 13:35:11.924165       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1114 13:35:11.927339       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1114 13:35:11.927373       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1114 13:35:12.800835       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1114 13:35:12.800967       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1114 13:35:12.814609       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1114 13:35:12.814645       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1114 13:35:12.842729       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1114 13:35:12.842764       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1114 13:35:12.845373       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1114 13:35:12.845479       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1114 13:35:13.025687       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1114 13:35:13.026022       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1114 13:35:15.196348       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Nov 14 13:41:01 addons-008546 kubelet[1356]: E1114 13:41:01.282799    1356 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/f91d97d1cde5451468fcedae254215ead1b3249de320cd35cca6152e945e5589/diff" to get inode usage: stat /var/lib/containers/storage/overlay/f91d97d1cde5451468fcedae254215ead1b3249de320cd35cca6152e945e5589/diff: no such file or directory, extraDiskErr: <nil>
	Nov 14 13:41:01 addons-008546 kubelet[1356]: E1114 13:41:01.438048    1356 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/d8b76dded8acd02c79e5d9792886b169eac43fd0d0da33fa33b295b809d6cae6/diff" to get inode usage: stat /var/lib/containers/storage/overlay/d8b76dded8acd02c79e5d9792886b169eac43fd0d0da33fa33b295b809d6cae6/diff: no such file or directory, extraDiskErr: <nil>
	Nov 14 13:41:01 addons-008546 kubelet[1356]: E1114 13:41:01.724081    1356 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/cad72a83642d5f16c412084fa106edb3730d841dd17faa86181c7b109f0209fd/diff" to get inode usage: stat /var/lib/containers/storage/overlay/cad72a83642d5f16c412084fa106edb3730d841dd17faa86181c7b109f0209fd/diff: no such file or directory, extraDiskErr: <nil>
	Nov 14 13:41:02 addons-008546 kubelet[1356]: I1114 13:41:02.282181    1356 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvtgn\" (UniqueName: \"kubernetes.io/projected/ea9a3530-9aab-4914-8a5c-0753a2ee56f8-kube-api-access-pvtgn\") pod \"ea9a3530-9aab-4914-8a5c-0753a2ee56f8\" (UID: \"ea9a3530-9aab-4914-8a5c-0753a2ee56f8\") "
	Nov 14 13:41:02 addons-008546 kubelet[1356]: I1114 13:41:02.291279    1356 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea9a3530-9aab-4914-8a5c-0753a2ee56f8-kube-api-access-pvtgn" (OuterVolumeSpecName: "kube-api-access-pvtgn") pod "ea9a3530-9aab-4914-8a5c-0753a2ee56f8" (UID: "ea9a3530-9aab-4914-8a5c-0753a2ee56f8"). InnerVolumeSpecName "kube-api-access-pvtgn". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Nov 14 13:41:02 addons-008546 kubelet[1356]: I1114 13:41:02.382664    1356 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-pvtgn\" (UniqueName: \"kubernetes.io/projected/ea9a3530-9aab-4914-8a5c-0753a2ee56f8-kube-api-access-pvtgn\") on node \"addons-008546\" DevicePath \"\""
	Nov 14 13:41:03 addons-008546 kubelet[1356]: I1114 13:41:03.156959    1356 scope.go:117] "RemoveContainer" containerID="9a8f8257ab13880e64aa296f4d92a22447e7c6f78e43ad4b872160c654c905b2"
	Nov 14 13:41:04 addons-008546 kubelet[1356]: I1114 13:41:04.864591    1356 scope.go:117] "RemoveContainer" containerID="fa3368a11bf062b197e80b4d94d9a5f18241c9be5fcdbecb41d1adeee63f6c92"
	Nov 14 13:41:04 addons-008546 kubelet[1356]: I1114 13:41:04.866827    1356 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="4bff14cd-070e-4ffb-8531-315faf77f51d" path="/var/lib/kubelet/pods/4bff14cd-070e-4ffb-8531-315faf77f51d/volumes"
	Nov 14 13:41:04 addons-008546 kubelet[1356]: I1114 13:41:04.867267    1356 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="ea9a3530-9aab-4914-8a5c-0753a2ee56f8" path="/var/lib/kubelet/pods/ea9a3530-9aab-4914-8a5c-0753a2ee56f8/volumes"
	Nov 14 13:41:04 addons-008546 kubelet[1356]: I1114 13:41:04.867664    1356 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="ed409e4f-7d99-4ff0-ab64-c809dad21cba" path="/var/lib/kubelet/pods/ed409e4f-7d99-4ff0-ab64-c809dad21cba/volumes"
	Nov 14 13:41:05 addons-008546 kubelet[1356]: I1114 13:41:05.163459    1356 scope.go:117] "RemoveContainer" containerID="fa3368a11bf062b197e80b4d94d9a5f18241c9be5fcdbecb41d1adeee63f6c92"
	Nov 14 13:41:05 addons-008546 kubelet[1356]: I1114 13:41:05.163676    1356 scope.go:117] "RemoveContainer" containerID="9e734f731597a6e3145f9a8007bedeb6bbbe7048bdfff652eb1791604ccae9f9"
	Nov 14 13:41:05 addons-008546 kubelet[1356]: E1114 13:41:05.163942    1356 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"hello-world-app\" with CrashLoopBackOff: \"back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5d77478584-dfpws_default(d6bb9c65-48bd-4bd1-b1bc-f6719dc17406)\"" pod="default/hello-world-app-5d77478584-dfpws" podUID="d6bb9c65-48bd-4bd1-b1bc-f6719dc17406"
	Nov 14 13:41:07 addons-008546 kubelet[1356]: I1114 13:41:07.170956    1356 scope.go:117] "RemoveContainer" containerID="50fd7cfa56fe9cb6b59e2ede48c06e9140d8567a8220716cf67bd482132f215c"
	Nov 14 13:41:07 addons-008546 kubelet[1356]: I1114 13:41:07.191915    1356 scope.go:117] "RemoveContainer" containerID="50fd7cfa56fe9cb6b59e2ede48c06e9140d8567a8220716cf67bd482132f215c"
	Nov 14 13:41:07 addons-008546 kubelet[1356]: E1114 13:41:07.192412    1356 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50fd7cfa56fe9cb6b59e2ede48c06e9140d8567a8220716cf67bd482132f215c\": container with ID starting with 50fd7cfa56fe9cb6b59e2ede48c06e9140d8567a8220716cf67bd482132f215c not found: ID does not exist" containerID="50fd7cfa56fe9cb6b59e2ede48c06e9140d8567a8220716cf67bd482132f215c"
	Nov 14 13:41:07 addons-008546 kubelet[1356]: I1114 13:41:07.192465    1356 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50fd7cfa56fe9cb6b59e2ede48c06e9140d8567a8220716cf67bd482132f215c"} err="failed to get container status \"50fd7cfa56fe9cb6b59e2ede48c06e9140d8567a8220716cf67bd482132f215c\": rpc error: code = NotFound desc = could not find container \"50fd7cfa56fe9cb6b59e2ede48c06e9140d8567a8220716cf67bd482132f215c\": container with ID starting with 50fd7cfa56fe9cb6b59e2ede48c06e9140d8567a8220716cf67bd482132f215c not found: ID does not exist"
	Nov 14 13:41:07 addons-008546 kubelet[1356]: I1114 13:41:07.317362    1356 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjq9t\" (UniqueName: \"kubernetes.io/projected/912af1c6-6c73-4d5f-8051-fe7e772ed908-kube-api-access-bjq9t\") pod \"912af1c6-6c73-4d5f-8051-fe7e772ed908\" (UID: \"912af1c6-6c73-4d5f-8051-fe7e772ed908\") "
	Nov 14 13:41:07 addons-008546 kubelet[1356]: I1114 13:41:07.317427    1356 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/912af1c6-6c73-4d5f-8051-fe7e772ed908-webhook-cert\") pod \"912af1c6-6c73-4d5f-8051-fe7e772ed908\" (UID: \"912af1c6-6c73-4d5f-8051-fe7e772ed908\") "
	Nov 14 13:41:07 addons-008546 kubelet[1356]: I1114 13:41:07.320431    1356 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/912af1c6-6c73-4d5f-8051-fe7e772ed908-kube-api-access-bjq9t" (OuterVolumeSpecName: "kube-api-access-bjq9t") pod "912af1c6-6c73-4d5f-8051-fe7e772ed908" (UID: "912af1c6-6c73-4d5f-8051-fe7e772ed908"). InnerVolumeSpecName "kube-api-access-bjq9t". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Nov 14 13:41:07 addons-008546 kubelet[1356]: I1114 13:41:07.320667    1356 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/912af1c6-6c73-4d5f-8051-fe7e772ed908-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "912af1c6-6c73-4d5f-8051-fe7e772ed908" (UID: "912af1c6-6c73-4d5f-8051-fe7e772ed908"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Nov 14 13:41:07 addons-008546 kubelet[1356]: I1114 13:41:07.417853    1356 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-bjq9t\" (UniqueName: \"kubernetes.io/projected/912af1c6-6c73-4d5f-8051-fe7e772ed908-kube-api-access-bjq9t\") on node \"addons-008546\" DevicePath \"\""
	Nov 14 13:41:07 addons-008546 kubelet[1356]: I1114 13:41:07.417894    1356 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/912af1c6-6c73-4d5f-8051-fe7e772ed908-webhook-cert\") on node \"addons-008546\" DevicePath \"\""
	Nov 14 13:41:08 addons-008546 kubelet[1356]: I1114 13:41:08.865389    1356 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="912af1c6-6c73-4d5f-8051-fe7e772ed908" path="/var/lib/kubelet/pods/912af1c6-6c73-4d5f-8051-fe7e772ed908/volumes"
	
	* 
	* ==> storage-provisioner [49b92ff93c0a90c993ecaf14dab1af85b59bb3f068ed5c50ab5599cb606f473d] <==
	* I1114 13:36:01.790647       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1114 13:36:01.815695       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1114 13:36:01.815917       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1114 13:36:01.824911       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1114 13:36:01.825069       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"de46657f-3e27-4fbe-a62d-6782c370d0d7", APIVersion:"v1", ResourceVersion:"852", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-008546_9a1eec4c-8a3e-4ba7-bd3b-6bccd089b783 became leader
	I1114 13:36:01.827113       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-008546_9a1eec4c-8a3e-4ba7-bd3b-6bccd089b783!
	I1114 13:36:01.928082       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-008546_9a1eec4c-8a3e-4ba7-bd3b-6bccd089b783!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-008546 -n addons-008546
helpers_test.go:261: (dbg) Run:  kubectl --context addons-008546 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (168.99s)
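Note: the "ssh: Process exited with status 28" recorded above is curl's CURLE_OPERATION_TIMEDOUT exit code, i.e. the request to the ingress on 127.0.0.1 never completed before curl gave up. A minimal manual re-check from inside the node, assuming the addons-008546 profile is still running, might look like:

	out/minikube-linux-arm64 -p addons-008546 ssh "curl -v --max-time 30 -H 'Host: nginx.example.com' http://127.0.0.1/"

The -v output would distinguish a connection that is accepted but never answered (backend not wired up) from one that is refused outright (controller not listening on :80).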

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (189.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [c31c1247-3301-483e-9f8c-2f9099824636] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.034450981s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-943397 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-943397 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-943397 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-943397 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [7e9ee6a6-812b-44d9-b39f-38654793eb3b] Pending
helpers_test.go:344: "sp-pod" [7e9ee6a6-812b-44d9-b39f-38654793eb3b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
functional_test_pvc_test.go:130: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 3m0s: context deadline exceeded ****
functional_test_pvc_test.go:130: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-943397 -n functional-943397
functional_test_pvc_test.go:130: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2023-11-14 13:48:27.008919003 +0000 UTC m=+872.126569984
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-943397 describe po sp-pod -n default
functional_test_pvc_test.go:130: (dbg) kubectl --context functional-943397 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-943397/192.168.49.2
Start Time:       Tue, 14 Nov 2023 13:45:26 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
  IP:  10.244.0.5
Containers:
  myfrontend:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ErrImagePull
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /tmp/mount from mypd (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sxb7d (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  mypd:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  myclaim
    ReadOnly:   false
  kube-api-access-sxb7d:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  3m1s                 default-scheduler  Successfully assigned default/sp-pod to functional-943397
  Warning  Failed     73s (x2 over 2m30s)  kubelet            Failed to pull image "docker.io/nginx": loading manifest for target platform: reading manifest sha256:565211f0ec2c97f4118c0c1b6be7f1c7775c0b3d44c2bb72bd32983a5696aa6a in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Normal   Pulling    45s (x3 over 3m1s)   kubelet            Pulling image "docker.io/nginx"
  Warning  Failed     15s (x3 over 2m30s)  kubelet            Error: ErrImagePull
  Warning  Failed     15s                  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Normal   BackOff    4s (x3 over 2m29s)   kubelet            Back-off pulling image "docker.io/nginx"
  Warning  Failed     4s (x3 over 2m29s)   kubelet            Error: ImagePullBackOff
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-943397 logs sp-pod -n default
functional_test_pvc_test.go:130: (dbg) Non-zero exit: kubectl --context functional-943397 logs sp-pod -n default: exit status 1 (94.945509ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: image can't be pulled

                                                
                                                
** /stderr **
functional_test_pvc_test.go:130: kubectl --context functional-943397 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:131: failed waiting for pod: test=storage-provisioner within 3m0s: context deadline exceeded
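Note: the root cause recorded in the pod events above is Docker Hub's anonymous pull rate limit (toomanyrequests), not a storage-provisioner fault. The remaining pull budget for the runner's IP can be checked against Docker Hub's documented rate-limit endpoint; a sketch, assuming curl and jq are available on the host:

	TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
	curl -s --head -H "Authorization: Bearer $TOKEN" "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest" | grep -i ratelimit

One mitigation consistent with the commands already exercised in the audit log below is to pull the image once on the host and side-load it into the profile, e.g. `docker pull docker.io/nginx && out/minikube-linux-arm64 -p functional-943397 image load docker.io/nginx`, so the in-cluster kubelet never has to hit the registry.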
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-943397
helpers_test.go:235: (dbg) docker inspect functional-943397:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "082437487ccd2fdfc24f80ec82e391ba4c84d4621b33da68135960fff7a04592",
	        "Created": "2023-11-14T13:42:39.111750304Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1207907,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-11-14T13:42:39.453196102Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:977f9df3a3e2dccc16de7b5e8115e5e1294a98b99d56135cce7538135e7a7a9d",
	        "ResolvConfPath": "/var/lib/docker/containers/082437487ccd2fdfc24f80ec82e391ba4c84d4621b33da68135960fff7a04592/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/082437487ccd2fdfc24f80ec82e391ba4c84d4621b33da68135960fff7a04592/hostname",
	        "HostsPath": "/var/lib/docker/containers/082437487ccd2fdfc24f80ec82e391ba4c84d4621b33da68135960fff7a04592/hosts",
	        "LogPath": "/var/lib/docker/containers/082437487ccd2fdfc24f80ec82e391ba4c84d4621b33da68135960fff7a04592/082437487ccd2fdfc24f80ec82e391ba4c84d4621b33da68135960fff7a04592-json.log",
	        "Name": "/functional-943397",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-943397:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-943397",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/48c927e8aa0ae441cfdb46f76587a4e96fe535656801af7469dd65892738ab3e-init/diff:/var/lib/docker/overlay2/ad9b1528ccc99a2a23c8205d781cfd6ce01aa0662a87aad99178910b13bfc77f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/48c927e8aa0ae441cfdb46f76587a4e96fe535656801af7469dd65892738ab3e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/48c927e8aa0ae441cfdb46f76587a4e96fe535656801af7469dd65892738ab3e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/48c927e8aa0ae441cfdb46f76587a4e96fe535656801af7469dd65892738ab3e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-943397",
	                "Source": "/var/lib/docker/volumes/functional-943397/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-943397",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-943397",
	                "name.minikube.sigs.k8s.io": "functional-943397",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4d2836dc8a843ca9f8def486cc6e686efc3d93c68122412479845a208d69eb40",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34289"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34288"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34285"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34287"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34286"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/4d2836dc8a84",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-943397": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "082437487ccd",
	                        "functional-943397"
	                    ],
	                    "NetworkID": "8002290be8c1b9f1efa437b9b92332de86d1bd8c2c249e1e7f2ed0ca34c598c1",
	                    "EndpointID": "3d3893ec3471ba59257736384960b2197c7bbf4995b95d62b8f4ec69e42bbc8c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-943397 -n functional-943397
helpers_test.go:244: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p functional-943397 logs -n 25: (1.958321385s)
helpers_test.go:252: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                  Args                                  |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| image          | functional-943397 image load --daemon                                  | functional-943397 | jenkins | v1.32.0 | 14 Nov 23 13:47 UTC | 14 Nov 23 13:47 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-943397               |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                   |         |         |                     |                     |
	| image          | functional-943397 image ls                                             | functional-943397 | jenkins | v1.32.0 | 14 Nov 23 13:47 UTC | 14 Nov 23 13:47 UTC |
	| image          | functional-943397 image save                                           | functional-943397 | jenkins | v1.32.0 | 14 Nov 23 13:47 UTC | 14 Nov 23 13:47 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-943397               |                   |         |         |                     |                     |
	|                | /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                   |         |         |                     |                     |
	| image          | functional-943397 image rm                                             | functional-943397 | jenkins | v1.32.0 | 14 Nov 23 13:47 UTC | 14 Nov 23 13:47 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-943397               |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                   |         |         |                     |                     |
	| image          | functional-943397 image ls                                             | functional-943397 | jenkins | v1.32.0 | 14 Nov 23 13:47 UTC | 14 Nov 23 13:47 UTC |
	| image          | functional-943397 image load                                           | functional-943397 | jenkins | v1.32.0 | 14 Nov 23 13:47 UTC | 14 Nov 23 13:47 UTC |
	|                | /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                   |         |         |                     |                     |
	| image          | functional-943397 image ls                                             | functional-943397 | jenkins | v1.32.0 | 14 Nov 23 13:47 UTC | 14 Nov 23 13:47 UTC |
	| image          | functional-943397 image save --daemon                                  | functional-943397 | jenkins | v1.32.0 | 14 Nov 23 13:47 UTC | 14 Nov 23 13:48 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-943397               |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                   |         |         |                     |                     |
	| ssh            | functional-943397 ssh sudo cat                                         | functional-943397 | jenkins | v1.32.0 | 14 Nov 23 13:48 UTC | 14 Nov 23 13:48 UTC |
	|                | /etc/test/nested/copy/1191690/hosts                                    |                   |         |         |                     |                     |
	| ssh            | functional-943397 ssh sudo cat                                         | functional-943397 | jenkins | v1.32.0 | 14 Nov 23 13:48 UTC | 14 Nov 23 13:48 UTC |
	|                | /etc/ssl/certs/1191690.pem                                             |                   |         |         |                     |                     |
	| ssh            | functional-943397 ssh sudo cat                                         | functional-943397 | jenkins | v1.32.0 | 14 Nov 23 13:48 UTC | 14 Nov 23 13:48 UTC |
	|                | /usr/share/ca-certificates/1191690.pem                                 |                   |         |         |                     |                     |
	| ssh            | functional-943397 ssh sudo cat                                         | functional-943397 | jenkins | v1.32.0 | 14 Nov 23 13:48 UTC | 14 Nov 23 13:48 UTC |
	|                | /etc/ssl/certs/51391683.0                                              |                   |         |         |                     |                     |
	| ssh            | functional-943397 ssh sudo cat                                         | functional-943397 | jenkins | v1.32.0 | 14 Nov 23 13:48 UTC | 14 Nov 23 13:48 UTC |
	|                | /etc/ssl/certs/11916902.pem                                            |                   |         |         |                     |                     |
	| ssh            | functional-943397 ssh sudo cat                                         | functional-943397 | jenkins | v1.32.0 | 14 Nov 23 13:48 UTC | 14 Nov 23 13:48 UTC |
	|                | /usr/share/ca-certificates/11916902.pem                                |                   |         |         |                     |                     |
	| ssh            | functional-943397 ssh sudo cat                                         | functional-943397 | jenkins | v1.32.0 | 14 Nov 23 13:48 UTC | 14 Nov 23 13:48 UTC |
	|                | /etc/ssl/certs/3ec20f2e.0                                              |                   |         |         |                     |                     |
	| image          | functional-943397                                                      | functional-943397 | jenkins | v1.32.0 | 14 Nov 23 13:48 UTC | 14 Nov 23 13:48 UTC |
	|                | image ls --format short                                                |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                   |         |         |                     |                     |
	| image          | functional-943397                                                      | functional-943397 | jenkins | v1.32.0 | 14 Nov 23 13:48 UTC | 14 Nov 23 13:48 UTC |
	|                | image ls --format yaml                                                 |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                   |         |         |                     |                     |
	| ssh            | functional-943397 ssh pgrep                                            | functional-943397 | jenkins | v1.32.0 | 14 Nov 23 13:48 UTC |                     |
	|                | buildkitd                                                              |                   |         |         |                     |                     |
	| image          | functional-943397 image build -t                                       | functional-943397 | jenkins | v1.32.0 | 14 Nov 23 13:48 UTC | 14 Nov 23 13:48 UTC |
	|                | localhost/my-image:functional-943397                                   |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                       |                   |         |         |                     |                     |
	| image          | functional-943397 image ls                                             | functional-943397 | jenkins | v1.32.0 | 14 Nov 23 13:48 UTC | 14 Nov 23 13:48 UTC |
	| image          | functional-943397                                                      | functional-943397 | jenkins | v1.32.0 | 14 Nov 23 13:48 UTC | 14 Nov 23 13:48 UTC |
	|                | image ls --format json                                                 |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                   |         |         |                     |                     |
	| image          | functional-943397                                                      | functional-943397 | jenkins | v1.32.0 | 14 Nov 23 13:48 UTC | 14 Nov 23 13:48 UTC |
	|                | image ls --format table                                                |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                   |         |         |                     |                     |
	| update-context | functional-943397                                                      | functional-943397 | jenkins | v1.32.0 | 14 Nov 23 13:48 UTC | 14 Nov 23 13:48 UTC |
	|                | update-context                                                         |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                   |         |         |                     |                     |
	| update-context | functional-943397                                                      | functional-943397 | jenkins | v1.32.0 | 14 Nov 23 13:48 UTC | 14 Nov 23 13:48 UTC |
	|                | update-context                                                         |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                   |         |         |                     |                     |
	| update-context | functional-943397                                                      | functional-943397 | jenkins | v1.32.0 | 14 Nov 23 13:48 UTC | 14 Nov 23 13:48 UTC |
	|                | update-context                                                         |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                   |         |         |                     |                     |
	|----------------|------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
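
	The middle of the Audit table records a complete image round-trip (save to tar, remove, load from tar). Reconstructed from the Command/Args/Profile columns above, the sequence was roughly:

	    out/minikube-linux-arm64 -p functional-943397 image save gcr.io/google-containers/addon-resizer:functional-943397 /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
	    out/minikube-linux-arm64 -p functional-943397 image rm gcr.io/google-containers/addon-resizer:functional-943397 --alsologtostderr
	    out/minikube-linux-arm64 -p functional-943397 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr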
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/14 13:47:25
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1114 13:47:25.712656 1217626 out.go:296] Setting OutFile to fd 1 ...
	I1114 13:47:25.712980 1217626 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 13:47:25.713010 1217626 out.go:309] Setting ErrFile to fd 2...
	I1114 13:47:25.713031 1217626 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 13:47:25.713344 1217626 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17581-1186318/.minikube/bin
	I1114 13:47:25.713776 1217626 out.go:303] Setting JSON to false
	I1114 13:47:25.714816 1217626 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":37792,"bootTime":1699931854,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1114 13:47:25.714929 1217626 start.go:138] virtualization:  
	I1114 13:47:25.717401 1217626 out.go:177] * [functional-943397] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1114 13:47:25.720253 1217626 out.go:177]   - MINIKUBE_LOCATION=17581
	I1114 13:47:25.723243 1217626 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1114 13:47:25.720412 1217626 notify.go:220] Checking for updates...
	I1114 13:47:25.725248 1217626 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17581-1186318/kubeconfig
	I1114 13:47:25.726976 1217626 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17581-1186318/.minikube
	I1114 13:47:25.728671 1217626 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1114 13:47:25.730284 1217626 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1114 13:47:25.732386 1217626 config.go:182] Loaded profile config "functional-943397": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 13:47:25.733187 1217626 driver.go:378] Setting default libvirt URI to qemu:///system
	I1114 13:47:25.757840 1217626 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1114 13:47:25.758021 1217626 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1114 13:47:25.851538 1217626 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:45 SystemTime:2023-11-14 13:47:25.841479987 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1114 13:47:25.851702 1217626 docker.go:295] overlay module found
	I1114 13:47:25.853657 1217626 out.go:177] * Using the docker driver based on existing profile
	I1114 13:47:25.855519 1217626 start.go:298] selected driver: docker
	I1114 13:47:25.855539 1217626 start.go:902] validating driver "docker" against &{Name:functional-943397 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:functional-943397 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 13:47:25.855655 1217626 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1114 13:47:25.855797 1217626 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1114 13:47:25.928724 1217626 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:45 SystemTime:2023-11-14 13:47:25.917173949 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1114 13:47:25.929175 1217626 cni.go:84] Creating CNI manager for ""
	I1114 13:47:25.929194 1217626 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1114 13:47:25.929205 1217626 start_flags.go:323] config:
	{Name:functional-943397 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:functional-943397 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 13:47:25.932434 1217626 out.go:177] * dry-run validation complete!
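
	The closing "dry-run validation complete!" line indicates this start pass only validated the existing profile (driver selection, CNI, config) without mutating it. The equivalent manual invocation would presumably be (flag combination is an assumption):

	    out/minikube-linux-arm64 start -p functional-943397 --dry-run --alsologtostderr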
	
	* 
	* ==> CRI-O <==
	* Nov 14 13:47:29 functional-943397 crio[4456]: time="2023-11-14 13:47:29.383199581Z" level=info msg="Created container 012d5f6ddf608c66566f04a0ce2fd593f5492613817f168de86678bcec60f7fe: kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc-t7x6c/dashboard-metrics-scraper" id=067c7293-b1f7-49d6-b30a-a6b58085c13e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 14 13:47:29 functional-943397 crio[4456]: time="2023-11-14 13:47:29.383982748Z" level=info msg="Starting container: 012d5f6ddf608c66566f04a0ce2fd593f5492613817f168de86678bcec60f7fe" id=b8582f05-5945-4e55-8731-ef748fa9d439 name=/runtime.v1.RuntimeService/StartContainer
	Nov 14 13:47:29 functional-943397 crio[4456]: time="2023-11-14 13:47:29.396074091Z" level=info msg="Started container" PID=6681 containerID=012d5f6ddf608c66566f04a0ce2fd593f5492613817f168de86678bcec60f7fe description=kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc-t7x6c/dashboard-metrics-scraper id=b8582f05-5945-4e55-8731-ef748fa9d439 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b7d71b892dad108519b455a835af841aad0f32ae8e3b923b33c052ff4c0bcc5a
	Nov 14 13:47:29 functional-943397 crio[4456]: time="2023-11-14 13:47:29.559949696Z" level=info msg="Trying to access \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Nov 14 13:47:33 functional-943397 crio[4456]: time="2023-11-14 13:47:33.760742751Z" level=info msg="Pulled image: docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=ca348679-8186-41e1-bb2c-eee2adb0f6d2 name=/runtime.v1.ImageService/PullImage
	Nov 14 13:47:33 functional-943397 crio[4456]: time="2023-11-14 13:47:33.762022357Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=dd1dc754-53d2-42be-ab5b-e2f167233411 name=/runtime.v1.ImageService/ImageStatus
	Nov 14 13:47:33 functional-943397 crio[4456]: time="2023-11-14 13:47:33.762993813Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8,RepoTags:[],RepoDigests:[docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf],Size_:247562353,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=dd1dc754-53d2-42be-ab5b-e2f167233411 name=/runtime.v1.ImageService/ImageStatus
	Nov 14 13:47:33 functional-943397 crio[4456]: time="2023-11-14 13:47:33.764358407Z" level=info msg="Creating container: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-sw6nw/kubernetes-dashboard" id=55d406fd-bc06-4e5b-a1da-7aa2bae220af name=/runtime.v1.RuntimeService/CreateContainer
	Nov 14 13:47:33 functional-943397 crio[4456]: time="2023-11-14 13:47:33.764457172Z" level=warning msg="Allowed annotations are specified for workload []"
	Nov 14 13:47:33 functional-943397 crio[4456]: time="2023-11-14 13:47:33.780394251Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/1aee109d8f87c888590aefa4ccc1225c291dcb0b9c718a71d4e6d380fd2036b2/merged/etc/group: no such file or directory"
	Nov 14 13:47:33 functional-943397 crio[4456]: time="2023-11-14 13:47:33.849089028Z" level=info msg="Created container 4c07326ce3a369ce6016e6e487bf2e76f83d24f39208d5244d9ea7e8a4b09309: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-sw6nw/kubernetes-dashboard" id=55d406fd-bc06-4e5b-a1da-7aa2bae220af name=/runtime.v1.RuntimeService/CreateContainer
	Nov 14 13:47:33 functional-943397 crio[4456]: time="2023-11-14 13:47:33.850059894Z" level=info msg="Starting container: 4c07326ce3a369ce6016e6e487bf2e76f83d24f39208d5244d9ea7e8a4b09309" id=53fcf353-5ae8-446a-8fd2-acafd136421d name=/runtime.v1.RuntimeService/StartContainer
	Nov 14 13:47:33 functional-943397 crio[4456]: time="2023-11-14 13:47:33.865089335Z" level=info msg="Started container" PID=6735 containerID=4c07326ce3a369ce6016e6e487bf2e76f83d24f39208d5244d9ea7e8a4b09309 description=kubernetes-dashboard/kubernetes-dashboard-8694d4445c-sw6nw/kubernetes-dashboard id=53fcf353-5ae8-446a-8fd2-acafd136421d name=/runtime.v1.RuntimeService/StartContainer sandboxID=cb5eb50f81fb44444ca05ac31996b3f1ae6dcd593db386a77db471f36b965715
	Nov 14 13:47:42 functional-943397 crio[4456]: time="2023-11-14 13:47:42.095616454Z" level=info msg="Checking image status: docker.io/nginx:latest" id=96603eca-0b12-4d01-b256-35c255905d78 name=/runtime.v1.ImageService/ImageStatus
	Nov 14 13:47:42 functional-943397 crio[4456]: time="2023-11-14 13:47:42.095924981Z" level=info msg="Image docker.io/nginx:latest not found" id=96603eca-0b12-4d01-b256-35c255905d78 name=/runtime.v1.ImageService/ImageStatus
	Nov 14 13:47:42 functional-943397 crio[4456]: time="2023-11-14 13:47:42.097314289Z" level=info msg="Pulling image: docker.io/nginx:latest" id=51e43dd4-afcf-4ea1-872d-f01234e5985a name=/runtime.v1.ImageService/PullImage
	Nov 14 13:47:42 functional-943397 crio[4456]: time="2023-11-14 13:47:42.101159278Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Nov 14 13:47:46 functional-943397 crio[4456]: time="2023-11-14 13:47:46.689552128Z" level=info msg="Checking image status: gcr.io/google-containers/addon-resizer:functional-943397" id=e651e938-6bc8-4e69-bc19-0e449577f2e6 name=/runtime.v1.ImageService/ImageStatus
	Nov 14 13:47:46 functional-943397 crio[4456]: time="2023-11-14 13:47:46.689797772Z" level=info msg="Image gcr.io/google-containers/addon-resizer:functional-943397 not found" id=e651e938-6bc8-4e69-bc19-0e449577f2e6 name=/runtime.v1.ImageService/ImageStatus
	Nov 14 13:47:55 functional-943397 crio[4456]: time="2023-11-14 13:47:55.616846118Z" level=info msg="Checking image status: gcr.io/google-containers/addon-resizer:functional-943397" id=92a61e7c-1975-40d1-a03f-c9117a52b4d6 name=/runtime.v1.ImageService/ImageStatus
	Nov 14 13:47:55 functional-943397 crio[4456]: time="2023-11-14 13:47:55.617075113Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:b08046378d77c9dfdab5fbe738244949bc9d487d7b394813b7209ff1f43b82cd,RepoTags:[gcr.io/google-containers/addon-resizer:functional-943397],RepoDigests:[gcr.io/google-containers/addon-resizer@sha256:2a8d4b63cfef57ff8da6bfa7a54875094128c3477d8ebde545a5f4e2465e35b3],Size_:40216491,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=92a61e7c-1975-40d1-a03f-c9117a52b4d6 name=/runtime.v1.ImageService/ImageStatus
	Nov 14 13:47:58 functional-943397 crio[4456]: time="2023-11-14 13:47:58.195239339Z" level=info msg="Checking image status: gcr.io/google-containers/addon-resizer:functional-943397" id=9471cbbd-630b-4087-87a9-1e8cc3103975 name=/runtime.v1.ImageService/ImageStatus
	Nov 14 13:47:58 functional-943397 crio[4456]: time="2023-11-14 13:47:58.195503944Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91,RepoTags:[gcr.io/google-containers/addon-resizer:functional-943397],RepoDigests:[gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126],Size_:34114467,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=9471cbbd-630b-4087-87a9-1e8cc3103975 name=/runtime.v1.ImageService/ImageStatus
	Nov 14 13:48:23 functional-943397 crio[4456]: time="2023-11-14 13:48:23.095753608Z" level=info msg="Checking image status: docker.io/nginx:latest" id=8bd32d30-c126-4f2f-9499-8419ceab0c70 name=/runtime.v1.ImageService/ImageStatus
	Nov 14 13:48:23 functional-943397 crio[4456]: time="2023-11-14 13:48:23.095982809Z" level=info msg="Image docker.io/nginx:latest not found" id=8bd32d30-c126-4f2f-9499-8419ceab0c70 name=/runtime.v1.ImageService/ImageStatus
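
	The tail of the CRI-O log shows a pull still in flight: "Pulling image: docker.io/nginx:latest" starts at 13:47:42, and the later ImageStatus checks still report the image as not found at 13:48:23. A hedged way to repeat those status checks by hand from the node:

	    out/minikube-linux-arm64 -p functional-943397 ssh "sudo crictl images"
	    out/minikube-linux-arm64 -p functional-943397 ssh "sudo crictl inspecti docker.io/library/nginx:latest"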
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID              POD
	4c07326ce3a36       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93         54 seconds ago       Running             kubernetes-dashboard        0                   cb5eb50f81fb4       kubernetes-dashboard-8694d4445c-sw6nw
	012d5f6ddf608       docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c   59 seconds ago       Running             dashboard-metrics-scraper   0                   b7d71b892dad1       dashboard-metrics-scraper-7fd5cb4ddc-t7x6c
	dfbd81b97fd52       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e              About a minute ago   Exited              mount-munger                0                   5d9e2fa32e5a6       busybox-mount
	23029a9715b29       72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb                                                 2 minutes ago        Running             echoserver-arm              0                   42bafc12637a6       hello-node-759d89bdcc-c4qs9
	1f7f29b2f9e0c       registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5           2 minutes ago        Running             echoserver-arm              0                   bccaa83ddd151       hello-node-connect-7799dfb7c6-q7v82
	3552a9d35aadf       docker.io/library/nginx@sha256:b7537eea6ffa4f00aac311f16654b50736328eb370208c68b6649a97b7a2724b                  3 minutes ago        Running             nginx                       0                   477aeefdce581       nginx-svc
	4865aba04b69a       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                                 3 minutes ago        Running             coredns                     3                   90b4330674ee6       coredns-5dd5756b68-4bchq
	f9b17a2b8161a       a5dd5cdd6d3ef8058b7fbcecacbcee7f522fa8b9f3bb53bac6570e62ba2cbdbd                                                 3 minutes ago        Running             kube-proxy                  3                   0406d552ce979       kube-proxy-7cghk
	6236d11e1a46a       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26                                                 3 minutes ago        Running             kindnet-cni                 3                   db9500d2add01       kindnet-5nm4t
	65df096e597f8       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                 3 minutes ago        Running             storage-provisioner         3                   59f2cbb98463b       storage-provisioner
	1b2f7514d7de3       537e9a59ee2fdef3cc7f5ebd14f9c4c77047176fca2bc5599db196217efeb5d7                                                 3 minutes ago        Running             kube-apiserver              0                   15c4737a72932       kube-apiserver-functional-943397
	37cd6bbdf7772       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                                 3 minutes ago        Running             etcd                        3                   7aec07aff4244       etcd-functional-943397
	16c05911025d8       8276439b4f237dda1f7820b0fcef600bb5662e441aa00e7b7c45843e60f04a16                                                 3 minutes ago        Running             kube-controller-manager     3                   b14524ae8978c       kube-controller-manager-functional-943397
	b6add8b6110f1       42a4e73724daac2ee0c96eeeb79b9cf5f242fc3927ccfdc4df63b58140097314                                                 3 minutes ago        Running             kube-scheduler              3                   6f8de951c5e5b       kube-scheduler-functional-943397
	98bbe4c66999c       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                                 4 minutes ago        Exited              coredns                     2                   90b4330674ee6       coredns-5dd5756b68-4bchq
	d0c21413f7e88       42a4e73724daac2ee0c96eeeb79b9cf5f242fc3927ccfdc4df63b58140097314                                                 4 minutes ago        Exited              kube-scheduler              2                   6f8de951c5e5b       kube-scheduler-functional-943397
	5018b414d449d       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26                                                 4 minutes ago        Exited              kindnet-cni                 2                   db9500d2add01       kindnet-5nm4t
	1ba42ff1d58d1       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                                 4 minutes ago        Exited              etcd                        2                   7aec07aff4244       etcd-functional-943397
	1343a61f72a71       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                 4 minutes ago        Exited              storage-provisioner         2                   59f2cbb98463b       storage-provisioner
	3af6ca201fe65       8276439b4f237dda1f7820b0fcef600bb5662e441aa00e7b7c45843e60f04a16                                                 4 minutes ago        Exited              kube-controller-manager     2                   b14524ae8978c       kube-controller-manager-functional-943397
	7e8b163818658       a5dd5cdd6d3ef8058b7fbcecacbcee7f522fa8b9f3bb53bac6570e62ba2cbdbd                                                 4 minutes ago        Exited              kube-proxy                  2                   0406d552ce979       kube-proxy-7cghk
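
	The ATTEMPT and STATE columns tell the restart story: each long-lived component has an Exited attempt from an earlier kubelet restart plus a Running successor, except kube-apiserver, which is on its first attempt. The same view comes straight from the CRI (a sketch, using crictl as the CRI-O log above does):

	    out/minikube-linux-arm64 -p functional-943397 ssh "sudo crictl ps -a"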
	
	* 
	* ==> coredns [4865aba04b69a5f189dd1e745b0f4df4c3b37a46463fa5af4289b7636e63178c] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:52704 - 4754 "HINFO IN 4968418693262788208.564135439067160802. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.012924513s
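
	The single random-name HINFO query against 127.0.0.1 looks like CoreDNS's startup self-probe (loop detection); the NXDOMAIN answer in ~13ms indicates the upstream path responds. An in-cluster spot check could look like this (pod name is illustrative):

	    kubectl --context functional-943397 run dnsprobe --image=busybox:1.28 --restart=Never --rm -it -- nslookup kubernetes.default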
	
	* 
	* ==> coredns [98bbe4c66999c8ebe3e8d6cb6cec8836058d92a645dddd4835d304dbeca2f433] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:59653 - 23894 "HINFO IN 1329069051231595613.591129070516178181. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.023015569s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               functional-943397
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-943397
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d8573efb5a7770e21024de23a29d810b200278b
	                    minikube.k8s.io/name=functional-943397
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_14T13_43_02_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 14 Nov 2023 13:42:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-943397
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 14 Nov 2023 13:48:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 14 Nov 2023 13:47:58 +0000   Tue, 14 Nov 2023 13:42:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 14 Nov 2023 13:47:58 +0000   Tue, 14 Nov 2023 13:42:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 14 Nov 2023 13:47:58 +0000   Tue, 14 Nov 2023 13:42:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 14 Nov 2023 13:47:58 +0000   Tue, 14 Nov 2023 13:43:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-943397
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 927dacde045e49f1a632631410011460
	  System UUID:                5814af1c-fc1e-454d-b352-7052f16c7285
	  Boot ID:                    3bdb9c53-2d63-44b9-be60-6ff1ad471e35
	  Kernel Version:             5.15.0-1049-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-759d89bdcc-c4qs9                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  default                     hello-node-connect-7799dfb7c6-q7v82           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m57s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m7s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	  kube-system                 coredns-5dd5756b68-4bchq                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m14s
	  kube-system                 etcd-functional-943397                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m26s
	  kube-system                 kindnet-5nm4t                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5m14s
	  kube-system                 kube-apiserver-functional-943397              250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m32s
	  kube-system                 kube-controller-manager-functional-943397     200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m27s
	  kube-system                 kube-proxy-7cghk                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m14s
	  kube-system                 kube-scheduler-functional-943397              100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m28s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m12s
	  kubernetes-dashboard        dashboard-metrics-scraper-7fd5cb4ddc-t7x6c    0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-sw6nw         0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5m12s                  kube-proxy       
	  Normal   Starting                 3m31s                  kube-proxy       
	  Normal   Starting                 4m13s                  kube-proxy       
	  Normal   Starting                 5m35s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  5m35s (x8 over 5m35s)  kubelet          Node functional-943397 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m35s (x8 over 5m35s)  kubelet          Node functional-943397 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m35s (x8 over 5m35s)  kubelet          Node functional-943397 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientPID     5m27s                  kubelet          Node functional-943397 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  5m27s                  kubelet          Node functional-943397 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m27s                  kubelet          Node functional-943397 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 5m27s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           5m14s                  node-controller  Node functional-943397 event: Registered Node functional-943397 in Controller
	  Normal   NodeReady                4m42s                  kubelet          Node functional-943397 status is now: NodeReady
	  Warning  ContainerGCFailed        4m27s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m1s                   node-controller  Node functional-943397 event: Registered Node functional-943397 in Controller
	  Normal   Starting                 3m38s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  3m38s (x8 over 3m38s)  kubelet          Node functional-943397 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m38s (x8 over 3m38s)  kubelet          Node functional-943397 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m38s (x8 over 3m38s)  kubelet          Node functional-943397 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           3m20s                  node-controller  Node functional-943397 event: Registered Node functional-943397 in Controller
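
	This dump is standard kubectl describe output captured by minikube logs; the same node view can be regenerated against the profile's context with:

	    kubectl --context functional-943397 describe node functional-943397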
	
	* 
	* ==> dmesg <==
	* [  +0.001143] FS-Cache: O-key=[8] '84643b0000000000'
	[  +0.000762] FS-Cache: N-cookie c=00000066 [p=0000005d fl=2 nc=0 na=1]
	[  +0.000999] FS-Cache: N-cookie d=00000000fbc4fe34{9p.inode} n=000000002244812d
	[  +0.001146] FS-Cache: N-key=[8] '84643b0000000000'
	[  +0.003454] FS-Cache: Duplicate cookie detected
	[  +0.000756] FS-Cache: O-cookie c=0000005f [p=0000005d fl=226 nc=0 na=1]
	[  +0.001057] FS-Cache: O-cookie d=00000000fbc4fe34{9p.inode} n=00000000ecb0ec67
	[  +0.001110] FS-Cache: O-key=[8] '84643b0000000000'
	[  +0.000749] FS-Cache: N-cookie c=00000067 [p=0000005d fl=2 nc=0 na=1]
	[  +0.001022] FS-Cache: N-cookie d=00000000fbc4fe34{9p.inode} n=0000000038574f41
	[  +0.001139] FS-Cache: N-key=[8] '84643b0000000000'
	[  +3.132585] FS-Cache: Duplicate cookie detected
	[  +0.000755] FS-Cache: O-cookie c=0000005e [p=0000005d fl=226 nc=0 na=1]
	[  +0.001032] FS-Cache: O-cookie d=00000000fbc4fe34{9p.inode} n=00000000e83a4aa7
	[  +0.001160] FS-Cache: O-key=[8] '83643b0000000000'
	[  +0.000753] FS-Cache: N-cookie c=00000069 [p=0000005d fl=2 nc=0 na=1]
	[  +0.000982] FS-Cache: N-cookie d=00000000fbc4fe34{9p.inode} n=000000002244812d
	[  +0.001111] FS-Cache: N-key=[8] '83643b0000000000'
	[  +0.323161] FS-Cache: Duplicate cookie detected
	[  +0.000805] FS-Cache: O-cookie c=00000063 [p=0000005d fl=226 nc=0 na=1]
	[  +0.001104] FS-Cache: O-cookie d=00000000fbc4fe34{9p.inode} n=0000000060b8cdea
	[  +0.001286] FS-Cache: O-key=[8] '89643b0000000000'
	[  +0.000771] FS-Cache: N-cookie c=0000006a [p=0000005d fl=2 nc=0 na=1]
	[  +0.001023] FS-Cache: N-cookie d=00000000fbc4fe34{9p.inode} n=00000000495e4eb3
	[  +0.001223] FS-Cache: N-key=[8] '89643b0000000000'
	
	* 
	* ==> etcd [1ba42ff1d58d193e5acce89455eecf48c55e07eb29aa98e95d70deae2b4543da] <==
	* {"level":"info","ts":"2023-11-14T13:44:12.052588Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 2"}
	{"level":"info","ts":"2023-11-14T13:44:12.052715Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2023-11-14T13:44:12.052768Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2023-11-14T13:44:12.052809Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2023-11-14T13:44:12.052846Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2023-11-14T13:44:12.052892Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2023-11-14T13:44:12.052927Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2023-11-14T13:44:12.058546Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-943397 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2023-11-14T13:44:12.058752Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-14T13:44:12.059839Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-11-14T13:44:12.065743Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-14T13:44:12.066895Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2023-11-14T13:44:12.100563Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-14T13:44:12.112248Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-11-14T13:44:39.506954Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-11-14T13:44:39.507Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"functional-943397","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"warn","ts":"2023-11-14T13:44:39.507119Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-11-14T13:44:39.507207Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-11-14T13:44:39.508268Z","caller":"v3rpc/watch.go:473","msg":"failed to send watch response to gRPC stream","error":"rpc error: code = Unavailable desc = transport is closing"}
	{"level":"warn","ts":"2023-11-14T13:44:39.563096Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-11-14T13:44:39.563151Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2023-11-14T13:44:39.564818Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2023-11-14T13:44:39.566943Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-11-14T13:44:39.567048Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-11-14T13:44:39.567065Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"functional-943397","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	* 
	* ==> etcd [37cd6bbdf7772a626b7015c8450804c737c41a1a4ede9f99930a41a61af0b866] <==
	* {"level":"info","ts":"2023-11-14T13:44:51.145337Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-11-14T13:44:51.145376Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-11-14T13:44:51.145876Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2023-11-14T13:44:51.158622Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2023-11-14T13:44:51.158857Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-14T13:44:51.159895Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-14T13:44:51.159746Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-11-14T13:44:51.159777Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-11-14T13:44:51.162457Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2023-11-14T13:44:51.163092Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-11-14T13:44:51.163167Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-11-14T13:44:52.980591Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 3"}
	{"level":"info","ts":"2023-11-14T13:44:52.980706Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 3"}
	{"level":"info","ts":"2023-11-14T13:44:52.98076Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2023-11-14T13:44:52.980799Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 4"}
	{"level":"info","ts":"2023-11-14T13:44:52.980831Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 4"}
	{"level":"info","ts":"2023-11-14T13:44:52.980869Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 4"}
	{"level":"info","ts":"2023-11-14T13:44:52.980909Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 4"}
	{"level":"info","ts":"2023-11-14T13:44:52.988723Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-943397 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2023-11-14T13:44:52.988819Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-14T13:44:52.989861Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-11-14T13:44:52.990136Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-14T13:44:52.991023Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2023-11-14T13:44:53.048575Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-14T13:44:53.048681Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  13:48:28 up 10:30,  0 users,  load average: 0.65, 0.86, 1.22
	Linux functional-943397 5.15.0-1049-aws #54~20.04.1-Ubuntu SMP Fri Oct 6 22:07:16 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [5018b414d449df49cb9ae53455e5e911f0a92cf02889f011b33434e7da88dacd] <==
	* I1114 13:44:10.626760       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1114 13:44:10.626974       1 main.go:107] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1114 13:44:10.628315       1 main.go:116] setting mtu 1500 for CNI 
	I1114 13:44:10.631515       1 main.go:146] kindnetd IP family: "ipv4"
	I1114 13:44:10.631600       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1114 13:44:14.797698       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1114 13:44:14.797975       1 main.go:227] handling current node
	I1114 13:44:24.804286       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1114 13:44:24.804314       1 main.go:227] handling current node
	I1114 13:44:34.821473       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1114 13:44:34.821505       1 main.go:227] handling current node
	
	* 
	* ==> kindnet [6236d11e1a46a433dc35a2393e3ed698dbfac6fb2a4ce734a5055471c1c5a554] <==
	* I1114 13:46:27.032773       1 main.go:227] handling current node
	I1114 13:46:37.037816       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1114 13:46:37.037847       1 main.go:227] handling current node
	I1114 13:46:47.048345       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1114 13:46:47.048374       1 main.go:227] handling current node
	I1114 13:46:57.052031       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1114 13:46:57.052064       1 main.go:227] handling current node
	I1114 13:47:07.061872       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1114 13:47:07.061899       1 main.go:227] handling current node
	I1114 13:47:17.073742       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1114 13:47:17.073774       1 main.go:227] handling current node
	I1114 13:47:27.085293       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1114 13:47:27.085406       1 main.go:227] handling current node
	I1114 13:47:37.098376       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1114 13:47:37.098406       1 main.go:227] handling current node
	I1114 13:47:47.108598       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1114 13:47:47.108702       1 main.go:227] handling current node
	I1114 13:47:57.112973       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1114 13:47:57.113003       1 main.go:227] handling current node
	I1114 13:48:07.125203       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1114 13:48:07.125234       1 main.go:227] handling current node
	I1114 13:48:17.137903       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1114 13:48:17.137929       1 main.go:227] handling current node
	I1114 13:48:27.141672       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1114 13:48:27.141705       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [1b2f7514d7de32ba0f62958f2321d02c6f8473d644cf8d98456e6c4b2093353a] <==
	* I1114 13:44:55.491362       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1114 13:44:55.534188       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1114 13:44:55.538049       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1114 13:44:55.538195       1 aggregator.go:166] initial CRD sync complete...
	I1114 13:44:55.538233       1 autoregister_controller.go:141] Starting autoregister controller
	I1114 13:44:55.538268       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1114 13:44:55.538303       1 cache.go:39] Caches are synced for autoregister controller
	E1114 13:44:55.540617       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1114 13:44:55.564025       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1114 13:44:56.241449       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1114 13:44:57.956494       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1114 13:44:58.092806       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1114 13:44:58.101825       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1114 13:44:58.166597       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1114 13:44:58.174376       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1114 13:45:14.264328       1 controller.go:624] quota admission added evaluator for: endpoints
	I1114 13:45:14.668846       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.101.220.87"}
	I1114 13:45:14.704008       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1114 13:45:21.947052       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.103.252.61"}
	I1114 13:45:31.433081       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1114 13:45:31.588844       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.98.74.75"}
	I1114 13:46:07.286156       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.102.105.206"}
	I1114 13:47:27.280765       1 controller.go:624] quota admission added evaluator for: namespaces
	I1114 13:47:27.537696       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.93.254"}
	I1114 13:47:27.564234       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.214.234"}
	
	* 
	* ==> kube-controller-manager [16c05911025d89f6d236291fd71a58224dea4cb5e2eef46f509ede97233a50c9] <==
	* I1114 13:47:27.413468       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="8.64673ms"
	E1114 13:47:27.413938       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-8694d4445c" failed with pods "kubernetes-dashboard-8694d4445c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1114 13:47:27.413903       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8694d4445c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I1114 13:47:27.425734       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="11.720277ms"
	E1114 13:47:27.427160       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-8694d4445c" failed with pods "kubernetes-dashboard-8694d4445c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1114 13:47:27.427106       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8694d4445c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I1114 13:47:27.435420       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="50.05242ms"
	E1114 13:47:27.435514       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" failed with pods "dashboard-metrics-scraper-7fd5cb4ddc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1114 13:47:27.436414       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="9.131453ms"
	E1114 13:47:27.436874       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-8694d4445c" failed with pods "kubernetes-dashboard-8694d4445c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1114 13:47:27.436796       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8694d4445c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I1114 13:47:27.449768       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-7fd5cb4ddc-t7x6c"
	I1114 13:47:27.470341       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-sw6nw"
	I1114 13:47:27.481369       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="45.791746ms"
	I1114 13:47:27.488340       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="38.438169ms"
	I1114 13:47:27.494543       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="12.362169ms"
	I1114 13:47:27.496956       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="30.95µs"
	I1114 13:47:27.498472       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="39.212µs"
	I1114 13:47:27.505203       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="16.750448ms"
	I1114 13:47:27.505306       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="65.944µs"
	I1114 13:47:27.517849       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="36.882µs"
	I1114 13:47:29.594426       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="17.052337ms"
	I1114 13:47:29.594574       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="40.459µs"
	I1114 13:47:34.609756       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="12.397787ms"
	I1114 13:47:34.609845       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="49.764µs"
	
	* 
	* ==> kube-controller-manager [3af6ca201fe654f1da80afff1604d202a84ee1ccbceddcb2d0509027bcf057eb] <==
	* I1114 13:44:27.307173       1 event.go:307] "Event occurred" object="functional-943397" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node functional-943397 event: Registered Node functional-943397 in Controller"
	I1114 13:44:27.311458       1 shared_informer.go:318] Caches are synced for service account
	I1114 13:44:27.312637       1 shared_informer.go:318] Caches are synced for namespace
	I1114 13:44:27.313805       1 shared_informer.go:318] Caches are synced for deployment
	I1114 13:44:27.315978       1 shared_informer.go:318] Caches are synced for crt configmap
	I1114 13:44:27.318208       1 shared_informer.go:318] Caches are synced for endpoint
	I1114 13:44:27.321151       1 shared_informer.go:318] Caches are synced for cronjob
	I1114 13:44:27.323529       1 shared_informer.go:318] Caches are synced for PVC protection
	I1114 13:44:27.325738       1 shared_informer.go:318] Caches are synced for GC
	I1114 13:44:27.329079       1 shared_informer.go:318] Caches are synced for attach detach
	I1114 13:44:27.332027       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I1114 13:44:27.334295       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I1114 13:44:27.347709       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I1114 13:44:27.397473       1 shared_informer.go:318] Caches are synced for ReplicationController
	I1114 13:44:27.408742       1 shared_informer.go:318] Caches are synced for resource quota
	I1114 13:44:27.420924       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="88.824928ms"
	I1114 13:44:27.422077       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="42.945µs"
	I1114 13:44:27.447743       1 shared_informer.go:318] Caches are synced for disruption
	I1114 13:44:27.460684       1 shared_informer.go:318] Caches are synced for resource quota
	I1114 13:44:27.829734       1 shared_informer.go:318] Caches are synced for garbage collector
	I1114 13:44:27.829768       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1114 13:44:27.855336       1 shared_informer.go:318] Caches are synced for garbage collector
	I1114 13:44:28.135759       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="83.594µs"
	I1114 13:44:28.156395       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.965809ms"
	I1114 13:44:28.156627       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="110.17µs"
	
	* 
	* ==> kube-proxy [7e8b1638186580150c47ebf17621fe21ffb427adc033cd8c456209a4e3a73b55] <==
	* I1114 13:44:10.996664       1 server_others.go:69] "Using iptables proxy"
	I1114 13:44:14.815438       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I1114 13:44:14.898472       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1114 13:44:14.902198       1 server_others.go:152] "Using iptables Proxier"
	I1114 13:44:14.903105       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1114 13:44:14.903183       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1114 13:44:14.903254       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1114 13:44:14.903575       1 server.go:846] "Version info" version="v1.28.3"
	I1114 13:44:14.903795       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1114 13:44:14.904706       1 config.go:188] "Starting service config controller"
	I1114 13:44:14.904818       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1114 13:44:14.904869       1 config.go:97] "Starting endpoint slice config controller"
	I1114 13:44:14.904899       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1114 13:44:14.905464       1 config.go:315] "Starting node config controller"
	I1114 13:44:14.905514       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1114 13:44:15.005039       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1114 13:44:15.005643       1 shared_informer.go:318] Caches are synced for service config
	I1114 13:44:15.005665       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-proxy [f9b17a2b8161aa1f90eb6142cd349d4db4f9f2c29b39407669a3195ff6e7137b] <==
	* I1114 13:44:56.728593       1 server_others.go:69] "Using iptables proxy"
	I1114 13:44:56.743194       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I1114 13:44:56.770875       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1114 13:44:56.772937       1 server_others.go:152] "Using iptables Proxier"
	I1114 13:44:56.772981       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1114 13:44:56.772989       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1114 13:44:56.773026       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1114 13:44:56.773265       1 server.go:846] "Version info" version="v1.28.3"
	I1114 13:44:56.773280       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1114 13:44:56.773999       1 config.go:188] "Starting service config controller"
	I1114 13:44:56.774064       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1114 13:44:56.774091       1 config.go:97] "Starting endpoint slice config controller"
	I1114 13:44:56.774096       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1114 13:44:56.774594       1 config.go:315] "Starting node config controller"
	I1114 13:44:56.774608       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1114 13:44:56.875114       1 shared_informer.go:318] Caches are synced for node config
	I1114 13:44:56.875152       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1114 13:44:56.875227       1 shared_informer.go:318] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [b6add8b6110f15d7da71176b432836001d631dd588f5f2a99461c4006fa98a31] <==
	* I1114 13:44:52.891203       1 serving.go:348] Generated self-signed cert in-memory
	W1114 13:44:55.381159       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1114 13:44:55.381196       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1114 13:44:55.381207       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1114 13:44:55.381214       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1114 13:44:55.462765       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.3"
	I1114 13:44:55.462889       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1114 13:44:55.468669       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1114 13:44:55.468792       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1114 13:44:55.471067       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1114 13:44:55.471206       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1114 13:44:55.569235       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [d0c21413f7e8853947d0eb4f03f5d4a8676ea67078cdaa90cb9276f5ca1e2c67] <==
	* I1114 13:44:12.368485       1 serving.go:348] Generated self-signed cert in-memory
	W1114 13:44:14.753853       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1114 13:44:14.753961       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1114 13:44:14.753998       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1114 13:44:14.754039       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1114 13:44:14.788337       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.3"
	I1114 13:44:14.788439       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1114 13:44:14.790386       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1114 13:44:14.790501       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1114 13:44:14.791065       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1114 13:44:14.794265       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1114 13:44:14.891765       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1114 13:44:39.514637       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I1114 13:44:39.515118       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E1114 13:44:39.515379       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* Nov 14 13:47:34 functional-943397 kubelet[4725]: I1114 13:47:34.594010    4725 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc-t7x6c" podStartSLOduration=6.132701665 podCreationTimestamp="2023-11-14 13:47:27 +0000 UTC" firstStartedPulling="2023-11-14 13:47:27.834421374 +0000 UTC m=+157.940471043" lastFinishedPulling="2023-11-14 13:47:29.295685101 +0000 UTC m=+159.401734770" observedRunningTime="2023-11-14 13:47:29.580033538 +0000 UTC m=+159.686083239" watchObservedRunningTime="2023-11-14 13:47:34.593965392 +0000 UTC m=+164.700015060"
	Nov 14 13:47:42 functional-943397 kubelet[4725]: I1114 13:47:42.119616    4725 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-sw6nw" podStartSLOduration=9.203495792 podCreationTimestamp="2023-11-14 13:47:27 +0000 UTC" firstStartedPulling="2023-11-14 13:47:27.845467613 +0000 UTC m=+157.951517290" lastFinishedPulling="2023-11-14 13:47:33.761495706 +0000 UTC m=+163.867545383" observedRunningTime="2023-11-14 13:47:34.59561759 +0000 UTC m=+164.701667259" watchObservedRunningTime="2023-11-14 13:47:42.119523885 +0000 UTC m=+172.225573562"
	Nov 14 13:47:50 functional-943397 kubelet[4725]: E1114 13:47:50.225340    4725 manager.go:1106] Failed to create existing container: /docker/082437487ccd2fdfc24f80ec82e391ba4c84d4621b33da68135960fff7a04592/crio-90b4330674ee629de1baa06454c9045800e4153b8d619de74310bb44954acbd5: Error finding container 90b4330674ee629de1baa06454c9045800e4153b8d619de74310bb44954acbd5: Status 404 returned error can't find the container with id 90b4330674ee629de1baa06454c9045800e4153b8d619de74310bb44954acbd5
	Nov 14 13:47:50 functional-943397 kubelet[4725]: E1114 13:47:50.225627    4725 manager.go:1106] Failed to create existing container: /crio-337c743d7075fd39b2977fe3bd6efcebd26cf5ea6c33b3de4da0574d32998736: Error finding container 337c743d7075fd39b2977fe3bd6efcebd26cf5ea6c33b3de4da0574d32998736: Status 404 returned error can't find the container with id 337c743d7075fd39b2977fe3bd6efcebd26cf5ea6c33b3de4da0574d32998736
	Nov 14 13:47:50 functional-943397 kubelet[4725]: E1114 13:47:50.225858    4725 manager.go:1106] Failed to create existing container: /docker/082437487ccd2fdfc24f80ec82e391ba4c84d4621b33da68135960fff7a04592/crio-db9500d2add010ef7ea819a0ad9aaac711ce9fa4b7507d13e430a9a890499fa3: Error finding container db9500d2add010ef7ea819a0ad9aaac711ce9fa4b7507d13e430a9a890499fa3: Status 404 returned error can't find the container with id db9500d2add010ef7ea819a0ad9aaac711ce9fa4b7507d13e430a9a890499fa3
	Nov 14 13:47:50 functional-943397 kubelet[4725]: E1114 13:47:50.230320    4725 manager.go:1106] Failed to create existing container: /crio-0406d552ce9791dda02d49d1168ff1ca5b36962723bea5a372a66fdc342763d9: Error finding container 0406d552ce9791dda02d49d1168ff1ca5b36962723bea5a372a66fdc342763d9: Status 404 returned error can't find the container with id 0406d552ce9791dda02d49d1168ff1ca5b36962723bea5a372a66fdc342763d9
	Nov 14 13:47:50 functional-943397 kubelet[4725]: E1114 13:47:50.230548    4725 manager.go:1106] Failed to create existing container: /docker/082437487ccd2fdfc24f80ec82e391ba4c84d4621b33da68135960fff7a04592/crio-59f2cbb98463b7a760f4b7aca63bf1868ebaa16f20cedc09d871febfc7a6f5e7: Error finding container 59f2cbb98463b7a760f4b7aca63bf1868ebaa16f20cedc09d871febfc7a6f5e7: Status 404 returned error can't find the container with id 59f2cbb98463b7a760f4b7aca63bf1868ebaa16f20cedc09d871febfc7a6f5e7
	Nov 14 13:47:50 functional-943397 kubelet[4725]: E1114 13:47:50.230773    4725 manager.go:1106] Failed to create existing container: /crio-b14524ae8978c47feadd21f4fc8e410e2cfe7acd10499894f31deb0fda104008: Error finding container b14524ae8978c47feadd21f4fc8e410e2cfe7acd10499894f31deb0fda104008: Status 404 returned error can't find the container with id b14524ae8978c47feadd21f4fc8e410e2cfe7acd10499894f31deb0fda104008
	Nov 14 13:47:50 functional-943397 kubelet[4725]: E1114 13:47:50.234061    4725 manager.go:1106] Failed to create existing container: /docker/082437487ccd2fdfc24f80ec82e391ba4c84d4621b33da68135960fff7a04592/crio-0406d552ce9791dda02d49d1168ff1ca5b36962723bea5a372a66fdc342763d9: Error finding container 0406d552ce9791dda02d49d1168ff1ca5b36962723bea5a372a66fdc342763d9: Status 404 returned error can't find the container with id 0406d552ce9791dda02d49d1168ff1ca5b36962723bea5a372a66fdc342763d9
	Nov 14 13:47:50 functional-943397 kubelet[4725]: E1114 13:47:50.236242    4725 manager.go:1106] Failed to create existing container: /docker/082437487ccd2fdfc24f80ec82e391ba4c84d4621b33da68135960fff7a04592/crio-6f8de951c5e5b94eaaf183bf8dd53461c2c61df9c60f1e1ccdf551da7ce859be: Error finding container 6f8de951c5e5b94eaaf183bf8dd53461c2c61df9c60f1e1ccdf551da7ce859be: Status 404 returned error can't find the container with id 6f8de951c5e5b94eaaf183bf8dd53461c2c61df9c60f1e1ccdf551da7ce859be
	Nov 14 13:47:50 functional-943397 kubelet[4725]: E1114 13:47:50.236429    4725 manager.go:1106] Failed to create existing container: /crio-db9500d2add010ef7ea819a0ad9aaac711ce9fa4b7507d13e430a9a890499fa3: Error finding container db9500d2add010ef7ea819a0ad9aaac711ce9fa4b7507d13e430a9a890499fa3: Status 404 returned error can't find the container with id db9500d2add010ef7ea819a0ad9aaac711ce9fa4b7507d13e430a9a890499fa3
	Nov 14 13:47:50 functional-943397 kubelet[4725]: E1114 13:47:50.236992    4725 manager.go:1106] Failed to create existing container: /crio-7aec07aff4244338c947eab6d8625e01a2243fe7ba2390476e52838f13be1ccd: Error finding container 7aec07aff4244338c947eab6d8625e01a2243fe7ba2390476e52838f13be1ccd: Status 404 returned error can't find the container with id 7aec07aff4244338c947eab6d8625e01a2243fe7ba2390476e52838f13be1ccd
	Nov 14 13:47:50 functional-943397 kubelet[4725]: E1114 13:47:50.237257    4725 manager.go:1106] Failed to create existing container: /crio-59f2cbb98463b7a760f4b7aca63bf1868ebaa16f20cedc09d871febfc7a6f5e7: Error finding container 59f2cbb98463b7a760f4b7aca63bf1868ebaa16f20cedc09d871febfc7a6f5e7: Status 404 returned error can't find the container with id 59f2cbb98463b7a760f4b7aca63bf1868ebaa16f20cedc09d871febfc7a6f5e7
	Nov 14 13:47:50 functional-943397 kubelet[4725]: E1114 13:47:50.237558    4725 manager.go:1106] Failed to create existing container: /crio-90b4330674ee629de1baa06454c9045800e4153b8d619de74310bb44954acbd5: Error finding container 90b4330674ee629de1baa06454c9045800e4153b8d619de74310bb44954acbd5: Status 404 returned error can't find the container with id 90b4330674ee629de1baa06454c9045800e4153b8d619de74310bb44954acbd5
	Nov 14 13:47:50 functional-943397 kubelet[4725]: E1114 13:47:50.237761    4725 manager.go:1106] Failed to create existing container: /crio-6f8de951c5e5b94eaaf183bf8dd53461c2c61df9c60f1e1ccdf551da7ce859be: Error finding container 6f8de951c5e5b94eaaf183bf8dd53461c2c61df9c60f1e1ccdf551da7ce859be: Status 404 returned error can't find the container with id 6f8de951c5e5b94eaaf183bf8dd53461c2c61df9c60f1e1ccdf551da7ce859be
	Nov 14 13:47:50 functional-943397 kubelet[4725]: E1114 13:47:50.237999    4725 manager.go:1106] Failed to create existing container: /docker/082437487ccd2fdfc24f80ec82e391ba4c84d4621b33da68135960fff7a04592/crio-b8293333298191346e13f2279d7755db1b681f19224427c7938c8b280754078c: Error finding container b8293333298191346e13f2279d7755db1b681f19224427c7938c8b280754078c: Status 404 returned error can't find the container with id b8293333298191346e13f2279d7755db1b681f19224427c7938c8b280754078c
	Nov 14 13:47:50 functional-943397 kubelet[4725]: E1114 13:47:50.241159    4725 manager.go:1106] Failed to create existing container: /crio-b8293333298191346e13f2279d7755db1b681f19224427c7938c8b280754078c: Error finding container b8293333298191346e13f2279d7755db1b681f19224427c7938c8b280754078c: Status 404 returned error can't find the container with id b8293333298191346e13f2279d7755db1b681f19224427c7938c8b280754078c
	Nov 14 13:47:50 functional-943397 kubelet[4725]: E1114 13:47:50.241415    4725 manager.go:1106] Failed to create existing container: /docker/082437487ccd2fdfc24f80ec82e391ba4c84d4621b33da68135960fff7a04592/crio-7aec07aff4244338c947eab6d8625e01a2243fe7ba2390476e52838f13be1ccd: Error finding container 7aec07aff4244338c947eab6d8625e01a2243fe7ba2390476e52838f13be1ccd: Status 404 returned error can't find the container with id 7aec07aff4244338c947eab6d8625e01a2243fe7ba2390476e52838f13be1ccd
	Nov 14 13:47:50 functional-943397 kubelet[4725]: E1114 13:47:50.241664    4725 manager.go:1106] Failed to create existing container: /docker/082437487ccd2fdfc24f80ec82e391ba4c84d4621b33da68135960fff7a04592/crio-b14524ae8978c47feadd21f4fc8e410e2cfe7acd10499894f31deb0fda104008: Error finding container b14524ae8978c47feadd21f4fc8e410e2cfe7acd10499894f31deb0fda104008: Status 404 returned error can't find the container with id b14524ae8978c47feadd21f4fc8e410e2cfe7acd10499894f31deb0fda104008
	Nov 14 13:47:50 functional-943397 kubelet[4725]: E1114 13:47:50.247187    4725 manager.go:1106] Failed to create existing container: /docker/082437487ccd2fdfc24f80ec82e391ba4c84d4621b33da68135960fff7a04592/crio-337c743d7075fd39b2977fe3bd6efcebd26cf5ea6c33b3de4da0574d32998736: Error finding container 337c743d7075fd39b2977fe3bd6efcebd26cf5ea6c33b3de4da0574d32998736: Status 404 returned error can't find the container with id 337c743d7075fd39b2977fe3bd6efcebd26cf5ea6c33b3de4da0574d32998736
	Nov 14 13:48:12 functional-943397 kubelet[4725]: E1114 13:48:12.380232    4725 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Nov 14 13:48:12 functional-943397 kubelet[4725]: E1114 13:48:12.380286    4725 kuberuntime_image.go:53] "Failed to pull image" err="reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Nov 14 13:48:12 functional-943397 kubelet[4725]: E1114 13:48:12.380374    4725 kuberuntime_manager.go:1256] container &Container{Name:myfrontend,Image:docker.io/nginx,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mypd,ReadOnly:false,MountPath:/tmp/mount,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-sxb7d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod sp-pod_default(7e9ee6a6-812b-44d9-b39f-38654793eb3b): ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Nov 14 13:48:12 functional-943397 kubelet[4725]: E1114 13:48:12.380411    4725 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="7e9ee6a6-812b-44d9-b39f-38654793eb3b"
	Nov 14 13:48:23 functional-943397 kubelet[4725]: E1114 13:48:23.096439    4725 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/sp-pod" podUID="7e9ee6a6-812b-44d9-b39f-38654793eb3b"
	
	* 
	* ==> kubernetes-dashboard [4c07326ce3a369ce6016e6e487bf2e76f83d24f39208d5244d9ea7e8a4b09309] <==
	* 2023/11/14 13:47:33 Using namespace: kubernetes-dashboard
	2023/11/14 13:47:33 Using in-cluster config to connect to apiserver
	2023/11/14 13:47:33 Using secret token for csrf signing
	2023/11/14 13:47:33 Initializing csrf token from kubernetes-dashboard-csrf secret
	2023/11/14 13:47:33 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2023/11/14 13:47:33 Successful initial request to the apiserver, version: v1.28.3
	2023/11/14 13:47:33 Generating JWE encryption key
	2023/11/14 13:47:33 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2023/11/14 13:47:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2023/11/14 13:47:35 Initializing JWE encryption key from synchronized object
	2023/11/14 13:47:35 Creating in-cluster Sidecar client
	2023/11/14 13:47:35 Successful request to sidecar
	2023/11/14 13:47:35 Serving insecurely on HTTP port: 9090
	2023/11/14 13:47:33 Starting overwatch
	
	* 
	* ==> storage-provisioner [1343a61f72a71c040684e4e7131a44f3b51332f47b2e5f5c4657ee6a06b6844c] <==
	* I1114 13:44:11.167917       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1114 13:44:14.813769       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1114 13:44:14.813953       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1114 13:44:32.233102       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1114 13:44:32.233543       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-943397_13517fee-054a-457f-a5e8-5ca4ba11c618!
	I1114 13:44:32.235545       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fa1b5c28-1334-48c0-b035-2a3deca85c3c", APIVersion:"v1", ResourceVersion:"574", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-943397_13517fee-054a-457f-a5e8-5ca4ba11c618 became leader
	I1114 13:44:32.343313       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-943397_13517fee-054a-457f-a5e8-5ca4ba11c618!
	
	* 
	* ==> storage-provisioner [65df096e597f841e4b024490ae147704ce08ee007fa4d1729f06eee9cb6812a6] <==
	* I1114 13:44:56.687511       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1114 13:44:56.703034       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1114 13:44:56.703117       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1114 13:45:14.267108       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1114 13:45:14.267292       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-943397_5a9feae8-da34-4e54-9900-e8ae2dee60de!
	I1114 13:45:14.269874       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fa1b5c28-1334-48c0-b035-2a3deca85c3c", APIVersion:"v1", ResourceVersion:"667", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-943397_5a9feae8-da34-4e54-9900-e8ae2dee60de became leader
	I1114 13:45:14.367911       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-943397_5a9feae8-da34-4e54-9900-e8ae2dee60de!
	I1114 13:45:26.319646       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I1114 13:45:26.323902       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"3d6bde1f-e437-4bb7-8ba7-c388300dee97", APIVersion:"v1", ResourceVersion:"724", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I1114 13:45:26.319780       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    76beceae-ac6e-44e5-aacb-75507923aac5 410 0 2023-11-14 13:43:16 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2023-11-14 13:43:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-3d6bde1f-e437-4bb7-8ba7-c388300dee97 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  3d6bde1f-e437-4bb7-8ba7-c388300dee97 724 0 2023-11-14 13:45:26 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2023-11-14 13:45:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2023-11-14 13:45:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I1114 13:45:26.327548       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-3d6bde1f-e437-4bb7-8ba7-c388300dee97" provisioned
	I1114 13:45:26.327636       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I1114 13:45:26.327680       1 volume_store.go:212] Trying to save persistentvolume "pvc-3d6bde1f-e437-4bb7-8ba7-c388300dee97"
	I1114 13:45:26.363072       1 volume_store.go:219] persistentvolume "pvc-3d6bde1f-e437-4bb7-8ba7-c388300dee97" saved
	I1114 13:45:26.366186       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"3d6bde1f-e437-4bb7-8ba7-c388300dee97", APIVersion:"v1", ResourceVersion:"724", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-3d6bde1f-e437-4bb7-8ba7-c388300dee97
	

                                                
                                                
-- /stdout --
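The kubelet entries and pod events in the log dump above all trace back to one root cause: anonymous pulls of docker.io/nginx hit Docker Hub's pull rate limit (toomanyrequests), while the claim itself provisioned and bound normally. As the error text suggests, authenticated pulls get the higher per-account limit. A minimal workaround sketch, not part of this test run — the secret name regcred and the credential placeholders are hypothetical:

	# Hypothetical sketch: create registry credentials so Docker Hub applies
	# the per-account pull limit instead of the anonymous per-IP one.
	kubectl --context functional-943397 create secret docker-registry regcred \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username=<user> \
	  --docker-password=<access-token>

	# Attach the secret to the default service account so pods such as
	# sp-pod pick it up without changes to their specs.
	kubectl --context functional-943397 patch serviceaccount default \
	  -p '{"imagePullSecrets": [{"name": "regcred"}]}'

A registry mirror configured in CRI-O's registries.conf is another common fix, but that is node-level runtime configuration rather than a kubectl command.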
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-943397 -n functional-943397
helpers_test.go:261: (dbg) Run:  kubectl --context functional-943397 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount sp-pod
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-943397 describe pod busybox-mount sp-pod
helpers_test.go:282: (dbg) kubectl --context functional-943397 describe pod busybox-mount sp-pod:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-943397/192.168.49.2
	Start Time:       Tue, 14 Nov 2023 13:46:20 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  mount-munger:
	    Container ID:  cri-o://dfbd81b97fd52590e796f4fa9e1f060b7cbe723b3daffe2a4bbfc381518a897e
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Tue, 14 Nov 2023 13:47:16 +0000
	      Finished:     Tue, 14 Nov 2023 13:47:16 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-twgtd (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-twgtd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  2m10s  default-scheduler  Successfully assigned default/busybox-mount to functional-943397
	  Normal  Pulling    2m10s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     74s    kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.969s (55.824s including waiting)
	  Normal  Created    74s    kubelet            Created container mount-munger
	  Normal  Started    74s    kubelet            Started container mount-munger
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-943397/192.168.49.2
	Start Time:       Tue, 14 Nov 2023 13:45:26 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sxb7d (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-sxb7d:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  3m4s                 default-scheduler  Successfully assigned default/sp-pod to functional-943397
	  Warning  Failed     76s (x2 over 2m33s)  kubelet            Failed to pull image "docker.io/nginx": loading manifest for target platform: reading manifest sha256:565211f0ec2c97f4118c0c1b6be7f1c7775c0b3d44c2bb72bd32983a5696aa6a in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   Pulling    48s (x3 over 3m4s)   kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     18s (x3 over 2m33s)  kubelet            Error: ErrImagePull
	  Warning  Failed     18s                  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   BackOff    7s (x3 over 2m32s)   kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     7s (x3 over 2m32s)   kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (189.25s)
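The post-mortem confirms sp-pod is stuck in ImagePullBackOff for the same rate-limit reason, not a storage failure (myclaim bound and its volume was provisioned). Two quick checks that reproduce the signal with this report's own tooling, assuming the functional-943397 profile is still running:

	# Pull the failing image directly on the node; this should surface the
	# same toomanyrequests error while the rate limit is in effect.
	out/minikube-linux-arm64 -p functional-943397 ssh "sudo crictl pull docker.io/library/nginx:latest"

	# List only the events for sp-pod (the trail shown in the describe output).
	kubectl --context functional-943397 get events -n default \
	  --field-selector involvedObject.name=sp-pod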

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (363.7s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-814110 addons enable ingress --alsologtostderr -v=5
E1114 13:50:20.963997 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/functional-943397/client.crt: no such file or directory
E1114 13:50:20.969297 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/functional-943397/client.crt: no such file or directory
E1114 13:50:20.979538 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/functional-943397/client.crt: no such file or directory
E1114 13:50:20.999819 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/functional-943397/client.crt: no such file or directory
E1114 13:50:21.040143 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/functional-943397/client.crt: no such file or directory
E1114 13:50:21.120538 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/functional-943397/client.crt: no such file or directory
E1114 13:50:21.281004 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/functional-943397/client.crt: no such file or directory
E1114 13:50:21.601565 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/functional-943397/client.crt: no such file or directory
E1114 13:50:22.241907 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/functional-943397/client.crt: no such file or directory
E1114 13:50:23.522555 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/functional-943397/client.crt: no such file or directory
E1114 13:50:26.083759 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/functional-943397/client.crt: no such file or directory
E1114 13:50:31.204312 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/functional-943397/client.crt: no such file or directory
E1114 13:50:41.445395 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/functional-943397/client.crt: no such file or directory
E1114 13:51:01.926504 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/functional-943397/client.crt: no such file or directory
E1114 13:51:42.887576 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/functional-943397/client.crt: no such file or directory
E1114 13:52:14.368910 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/addons-008546/client.crt: no such file or directory
E1114 13:53:04.808343 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/functional-943397/client.crt: no such file or directory
E1114 13:55:20.963889 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/functional-943397/client.crt: no such file or directory
E1114 13:55:48.649492 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/functional-943397/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ingress-addon-legacy-814110 addons enable ingress --alsologtostderr -v=5: exit status 10 (6m1.079715619s)

                                                
                                                
-- stdout --
	* ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image registry.k8s.io/ingress-nginx/controller:v0.49.3
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	* Verifying ingress addon...
	
	

                                                
                                                
-- /stdout --
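The progress messages above are printed before verification completes, so they are not evidence of success: the 6m1s wall time points at the "Verifying ingress addon..." wait timing out rather than the manifest apply failing (the apply at 13:50:06 returns promptly in the stderr trace that follows). The trace shows the full sequence: minikube first probes whether the cluster is paused by listing kube-system containers through CRI — all eight are running, so the cluster is not paused and the enable proceeds — then applies ingress-deploy.yaml and polls the ingress-nginx pods, which never leave Pending. The paused probe can be reproduced outside minikube; a rough Go sketch, assuming crictl on PATH and sudo access on the node:

package main

import (
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

// listKubeSystemContainers mirrors the probe in the trace below:
// crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
// with an optional --state filter.
func listKubeSystemContainers(state string) ([]string, error) {
	args := []string{"crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system"}
	if state != "" {
		args = append(args, "--state", state)
	}
	out, err := exec.Command("sudo", args...).Output()
	if err != nil {
		return nil, fmt.Errorf("crictl: %w", err)
	}
	return strings.Fields(string(bytes.TrimSpace(out))), nil
}

func main() {
	running, err := listKubeSystemContainers("running")
	if err != nil {
		fmt.Println(err)
		return
	}
	// minikube treats the cluster as paused only when nothing is running.
	fmt.Printf("cluster paused: %v (%d running kube-system containers)\n",
		len(running) == 0, len(running))
}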
** stderr ** 
	I1114 13:50:05.375115 1223285 out.go:296] Setting OutFile to fd 1 ...
	I1114 13:50:05.375684 1223285 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 13:50:05.375718 1223285 out.go:309] Setting ErrFile to fd 2...
	I1114 13:50:05.375743 1223285 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 13:50:05.376067 1223285 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17581-1186318/.minikube/bin
	I1114 13:50:05.376447 1223285 mustload.go:65] Loading cluster: ingress-addon-legacy-814110
	I1114 13:50:05.376914 1223285 config.go:182] Loaded profile config "ingress-addon-legacy-814110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1114 13:50:05.376979 1223285 addons.go:594] checking whether the cluster is paused
	I1114 13:50:05.377132 1223285 config.go:182] Loaded profile config "ingress-addon-legacy-814110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1114 13:50:05.377195 1223285 host.go:66] Checking if "ingress-addon-legacy-814110" exists ...
	I1114 13:50:05.377770 1223285 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-814110 --format={{.State.Status}}
	I1114 13:50:05.406093 1223285 ssh_runner.go:195] Run: systemctl --version
	I1114 13:50:05.406162 1223285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-814110
	I1114 13:50:05.437996 1223285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34294 SSHKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/ingress-addon-legacy-814110/id_rsa Username:docker}
	I1114 13:50:05.539466 1223285 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1114 13:50:05.539575 1223285 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1114 13:50:05.594022 1223285 cri.go:89] found id: "e2dfea6db0038b864200055ec6a5d37fcf9105316391feb798764e2953c92119"
	I1114 13:50:05.594061 1223285 cri.go:89] found id: "2cf9960ed4483ad88f1ec2b9a17f53e101cd65dcbc0dc50339d31b26821cb572"
	I1114 13:50:05.594068 1223285 cri.go:89] found id: "1233093423d4dd2cd51a48a275bd031397a8c8cc3f80caad8be47bdf0ce8d792"
	I1114 13:50:05.594072 1223285 cri.go:89] found id: "3ff47c9dd0749b62e315cd73745025e93b632a3b2359ad311a1319f9c6db623c"
	I1114 13:50:05.594076 1223285 cri.go:89] found id: "4e5d19b2f0e8227e0d1d4a26093125a427c7b7538a1409bf2f30f3c0ad038fba"
	I1114 13:50:05.594081 1223285 cri.go:89] found id: "c3514bf1a6e6b88e0d142b54a893762bacd6330d9afa8404a5bf8e09137177a0"
	I1114 13:50:05.594085 1223285 cri.go:89] found id: "1e9198b4f97a6f3d51b839bca467dcfafccfb90dc04c16d668c85ab153a5b7fd"
	I1114 13:50:05.594089 1223285 cri.go:89] found id: "92ecb93026e45063bce6707ecf57a8e58efb9cfb6c1adbfefaa07a4540a4f13a"
	I1114 13:50:05.594093 1223285 cri.go:89] found id: ""
	I1114 13:50:05.594153 1223285 ssh_runner.go:195] Run: sudo runc list -f json
	I1114 13:50:05.623107 1223285 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"1233093423d4dd2cd51a48a275bd031397a8c8cc3f80caad8be47bdf0ce8d792","pid":2134,"status":"running","bundle":"/run/containers/storage/overlay-containers/1233093423d4dd2cd51a48a275bd031397a8c8cc3f80caad8be47bdf0ce8d792/userdata","rootfs":"/var/lib/containers/storage/overlay/d88050410eb8b48b2323e5d57e67b9661fdc9eec2540709f448dc90e8de031b4/merged","created":"2023-11-14T13:49:51.004396731Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"98852511","io.kubernetes.container.name":"kindnet-cni","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"98852511\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminatio
nMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"1233093423d4dd2cd51a48a275bd031397a8c8cc3f80caad8be47bdf0ce8d792","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-11-14T13:49:50.942828223Z","io.kubernetes.cri-o.Image":"docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","io.kubernetes.cri-o.ImageName":"docker.io/kindest/kindnetd:v20230809-80a64d96","io.kubernetes.cri-o.ImageRef":"04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kindnet-cni\",\"io.kubernetes.pod.name\":\"kindnet-66n2z\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"6e8db226-b9d6-49cd-af22-fcb350c5de74\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-66n2z_6e8db226-b9d6-49cd-af22-fcb350c5de74/kindnet-cni/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-cni\
"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/d88050410eb8b48b2323e5d57e67b9661fdc9eec2540709f448dc90e8de031b4/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-cni_kindnet-66n2z_kube-system_6e8db226-b9d6-49cd-af22-fcb350c5de74_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/4eba235f78dc75f632bd983c824a5aa72e2065d95bb9eafd11367d7fbde62974/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"4eba235f78dc75f632bd983c824a5aa72e2065d95bb9eafd11367d7fbde62974","io.kubernetes.cri-o.SandboxName":"k8s_kindnet-66n2z_kube-system_6e8db226-b9d6-49cd-af22-fcb350c5de74_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"r
eadonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/6e8db226-b9d6-49cd-af22-fcb350c5de74/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/6e8db226-b9d6-49cd-af22-fcb350c5de74/containers/kindnet-cni/d026591d\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/cni/net.d\",\"host_path\":\"/etc/cni/net.d\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/6e8db226-b9d6-49cd-af22-fcb350c5de74/volumes/kubernetes.io~secret/kindnet-token-fv75l\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kindnet-66n2z","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"6e8db
226-b9d6-49cd-af22-fcb350c5de74","kubernetes.io/config.seen":"2023-11-14T13:49:48.635811751Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1e9198b4f97a6f3d51b839bca467dcfafccfb90dc04c16d668c85ab153a5b7fd","pid":1504,"status":"running","bundle":"/run/containers/storage/overlay-containers/1e9198b4f97a6f3d51b839bca467dcfafccfb90dc04c16d668c85ab153a5b7fd/userdata","rootfs":"/var/lib/containers/storage/overlay/21a75626cf449c8a72a23cb7d4beeddf23ff884ca31023901073617b6a375e22/merged","created":"2023-11-14T13:49:22.444712083Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"af82c9d4","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"af82c9d4\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.termi
nationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"1e9198b4f97a6f3d51b839bca467dcfafccfb90dc04c16d668c85ab153a5b7fd","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-11-14T13:49:22.200958891Z","io.kubernetes.cri-o.Image":"ab707b0a0ea339254cc6e3f2e7d618d4793d5129acb2288e9194769271404952","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/etcd:3.4.3-0","io.kubernetes.cri-o.ImageRef":"ab707b0a0ea339254cc6e3f2e7d618d4793d5129acb2288e9194769271404952","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-ingress-addon-legacy-814110\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"3e73aa66ca76418b5fefeb10e851549a\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-ingress-addon-legacy-814110_3e73aa66ca76418b5fefeb10e851549a/etcd/0.log","io.kubernetes.
cri-o.Metadata":"{\"name\":\"etcd\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/21a75626cf449c8a72a23cb7d4beeddf23ff884ca31023901073617b6a375e22/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-ingress-addon-legacy-814110_kube-system_3e73aa66ca76418b5fefeb10e851549a_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/4c9e140679323ce35e98442e02a47c80f800ee775d5deb1cef2db0f9eef9c2fe/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"4c9e140679323ce35e98442e02a47c80f800ee775d5deb1cef2db0f9eef9c2fe","io.kubernetes.cri-o.SandboxName":"k8s_etcd-ingress-addon-legacy-814110_kube-system_3e73aa66ca76418b5fefeb10e851549a_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/3e73aa66ca76418b5fefeb10e851549a/etc-hosts\",\"readonly\":false,\"propagation\":0,\
"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/3e73aa66ca76418b5fefeb10e851549a/containers/etcd/b0b950dc\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"etcd-ingress-addon-legacy-814110","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"3e73aa66ca76418b5fefeb10e851549a","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"3e73aa66ca76418b5fefeb10e851549a","kubernetes.io/config.seen":"2023-11-14T13:49:17.921492954Z","kubernetes.io/config.source":"file"},"owner":"root"},{"oci
Version":"1.0.2-dev","id":"2cf9960ed4483ad88f1ec2b9a17f53e101cd65dcbc0dc50339d31b26821cb572","pid":2234,"status":"running","bundle":"/run/containers/storage/overlay-containers/2cf9960ed4483ad88f1ec2b9a17f53e101cd65dcbc0dc50339d31b26821cb572/userdata","rootfs":"/var/lib/containers/storage/overlay/f5d2bbe6fcc7fc7340cb7e70a6c5507ab1b46bf29572beb4d6c6f83579811fc1/merged","created":"2023-11-14T13:49:58.671451116Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"20babdea","io.kubernetes.container.name":"storage-provisioner","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"20babdea\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminatio
nGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"2cf9960ed4483ad88f1ec2b9a17f53e101cd65dcbc0dc50339d31b26821cb572","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-11-14T13:49:58.621853164Z","io.kubernetes.cri-o.Image":"gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","io.kubernetes.cri-o.ImageName":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri-o.ImageRef":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"storage-provisioner\",\"io.kubernetes.pod.name\":\"storage-provisioner\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"f869d2a3-1807-444a-9049-9298cc449066\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-provisioner_f869d2a3-1807-444a-9049-9298cc449066/storage-provisioner/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"storage-provisioner\"}","io.ku
bernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/f5d2bbe6fcc7fc7340cb7e70a6c5507ab1b46bf29572beb4d6c6f83579811fc1/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_storage-provisioner_kube-system_f869d2a3-1807-444a-9049-9298cc449066_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/7eb3670aa5439b86bd1c8344d2cc1bed2238a9fe9b37923337e35befe3955709/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"7eb3670aa5439b86bd1c8344d2cc1bed2238a9fe9b37923337e35befe3955709","io.kubernetes.cri-o.SandboxName":"k8s_storage-provisioner_kube-system_f869d2a3-1807-444a-9049-9298cc449066_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/tmp\",\"host_path\":\"/tmp\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/f869d2a3-1807-
444a-9049-9298cc449066/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/f869d2a3-1807-444a-9049-9298cc449066/containers/storage-provisioner/07033134\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/f869d2a3-1807-444a-9049-9298cc449066/volumes/kubernetes.io~secret/storage-provisioner-token-b5vn6\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"f869d2a3-1807-444a-9049-9298cc449066","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provis
ioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2023-11-14T13:49:56.381151009Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3ff47c9dd0749b62e315cd73745025e93b632a3b2359ad311a1319f9c6db623c","pid":2019,"status":"running","bundle":"/run/containers/storage/overlay-containers/3ff47c9dd0749b62e315cd73745025e93b632a3b2359ad311a1319f9c6db623c/userdata","rootfs":"/var/lib/containers/storage/overlay/aaaa2f498b2b6b09a7bd9f1d77f8956bcb726a3c4cc80cd476c32802dfaae445/merged","created":"2023-11-14T13:49:48.748465546Z","
annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"81090204","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"81090204\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"3ff47c9dd0749b62e315cd73745025e93b632a3b2359ad311a1319f9c6db623c","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-11-14T13:49:48.659645031Z","io.kubernetes.cri-o.Image":"565297bc6f7d41fdb7a8ac7f9d75617ef4e6efdd1b1e41af6e060e19c44c28a8","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-proxy:v1.18.20","io.kubernetes.cri-o.ImageR
ef":"565297bc6f7d41fdb7a8ac7f9d75617ef4e6efdd1b1e41af6e060e19c44c28a8","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-n98c2\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"efab5402-60be-4b66-b02e-7954cd10b4a2\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-n98c2_efab5402-60be-4b66-b02e-7954cd10b4a2/kube-proxy/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/aaaa2f498b2b6b09a7bd9f1d77f8956bcb726a3c4cc80cd476c32802dfaae445/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-n98c2_kube-system_efab5402-60be-4b66-b02e-7954cd10b4a2_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/80b0c62f16b2f9635339ae03abf179f027f1a1fa7627946af910577f5ece06bc/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"80b0c62f16b2f9635339ae03abf179f027f1a1fa7627946af910577f5ece06bc","io
.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-n98c2_kube-system_efab5402-60be-4b66-b02e-7954cd10b4a2_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/efab5402-60be-4b66-b02e-7954cd10b4a2/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/efab5402-60be-4b66-b02e-7954cd10b4a2/containers/kube-proxy/2fa144ff\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\"
:\"/var/lib/kubelet/pods/efab5402-60be-4b66-b02e-7954cd10b4a2/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/efab5402-60be-4b66-b02e-7954cd10b4a2/volumes/kubernetes.io~secret/kube-proxy-token-ggvrt\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-proxy-n98c2","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"efab5402-60be-4b66-b02e-7954cd10b4a2","kubernetes.io/config.seen":"2023-11-14T13:49:48.283744515Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4e5d19b2f0e8227e0d1d4a26093125a427c7b7538a1409bf2f30f3c0ad038fba","pid":1533,"status":"running","bundle":"/run/containers/storage/overlay-containers/4e5d19b2f0e8227e0d1d4a26093125a427c7b7538a1409bf2f30f3c0ad038fba/userdata","rootfs":"/var/lib/co
ntainers/storage/overlay/4835f1fdc52f89a14e63513fa1162852a89a8e34194dee13482e15d2339d46e2/merged","created":"2023-11-14T13:49:22.301341145Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"ce880c0b","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"ce880c0b\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"4e5d19b2f0e8227e0d1d4a26093125a427c7b7538a1409bf2f30f3c0ad038fba","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-11-14T13:49:22.25106766Z","io.kubernetes.cri-o.Image":"68a4
fac29a865f21217550dbd3570dc1adbc602cf05d6eeb6f060eec1359e1f1","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-controller-manager:v1.18.20","io.kubernetes.cri-o.ImageRef":"68a4fac29a865f21217550dbd3570dc1adbc602cf05d6eeb6f060eec1359e1f1","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-ingress-addon-legacy-814110\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"49b043cd68fd30a453bdf128db5271f3\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-ingress-addon-legacy-814110_49b043cd68fd30a453bdf128db5271f3/kube-controller-manager/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/4835f1fdc52f89a14e63513fa1162852a89a8e34194dee13482e15d2339d46e2/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-ingress-addon-legacy-814110_ku
be-system_49b043cd68fd30a453bdf128db5271f3_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/d5943c1440d011389d44de09235775383c013e43a16d03fa5c3fb7d0521679e0/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"d5943c1440d011389d44de09235775383c013e43a16d03fa5c3fb7d0521679e0","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-ingress-addon-legacy-814110_kube-system_49b043cd68fd30a453bdf128db5271f3_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/49b043cd68fd30a453bdf128db5271f3/containers/kube-controller-manager/ae0bce1b\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path
\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/49b043cd68fd30a453bdf128db5271f3/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-pl
ugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-ingress-addon-legacy-814110","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"49b043cd68fd30a453bdf128db5271f3","kubernetes.io/config.hash":"49b043cd68fd30a453bdf128db5271f3","kubernetes.io/config.seen":"2023-11-14T13:49:17.918344052Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"92ecb93026e45063bce6707ecf57a8e58efb9cfb6c1adbfefaa07a4540a4f13a","pid":1450,"status":"running","bundle":"/run/containers/storage/overlay-containers/92ecb93026e45063bce6707ecf57a8e58efb9cfb6c1adbfefaa07a4540a4f13a/userdata","rootfs":"/var/lib/containers/storage/overlay/c852fedc11939d91e02c53408f99f7f2672826cf688c8bfe83d4830dc4ac078d/merged","created":"2023-11-14T13:49:22.095630897Z","annotations":{"io.container.manager":"cri
-o","io.kubernetes.container.hash":"ef5ef709","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"ef5ef709\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"92ecb93026e45063bce6707ecf57a8e58efb9cfb6c1adbfefaa07a4540a4f13a","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-11-14T13:49:22.037390974Z","io.kubernetes.cri-o.Image":"095f37015706de6eedb4f57eb2f9a25a1e3bf4bec63d50ba73f8968ef4094fd1","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-scheduler:v1.18.20","io.kubernetes.cri-o.ImageRef":"095f37015706de6eedb4f57eb2f9
a25a1e3bf4bec63d50ba73f8968ef4094fd1","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-ingress-addon-legacy-814110\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"d12e497b0008e22acbcd5a9cf2dd48ac\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-ingress-addon-legacy-814110_d12e497b0008e22acbcd5a9cf2dd48ac/kube-scheduler/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/c852fedc11939d91e02c53408f99f7f2672826cf688c8bfe83d4830dc4ac078d/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-ingress-addon-legacy-814110_kube-system_d12e497b0008e22acbcd5a9cf2dd48ac_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/4d92df1fe9b1c49dc4cb58132d77b7fa1bf0d05c773a5b629b41be50be1df85f/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"4d92df1fe9b1c49dc4cb
58132d77b7fa1bf0d05c773a5b629b41be50be1df85f","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-ingress-addon-legacy-814110_kube-system_d12e497b0008e22acbcd5a9cf2dd48ac_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/d12e497b0008e22acbcd5a9cf2dd48ac/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/d12e497b0008e22acbcd5a9cf2dd48ac/containers/kube-scheduler/ae79b580\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-ingress-addon-legacy-814110","io.kubernetes.pod.names
pace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"d12e497b0008e22acbcd5a9cf2dd48ac","kubernetes.io/config.hash":"d12e497b0008e22acbcd5a9cf2dd48ac","kubernetes.io/config.seen":"2023-11-14T13:49:17.920001551Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c3514bf1a6e6b88e0d142b54a893762bacd6330d9afa8404a5bf8e09137177a0","pid":1524,"status":"running","bundle":"/run/containers/storage/overlay-containers/c3514bf1a6e6b88e0d142b54a893762bacd6330d9afa8404a5bf8e09137177a0/userdata","rootfs":"/var/lib/containers/storage/overlay/d98e398b2ff8213945fa24a7058d112d08de79d2611f364cda062ce90cd4a2bb/merged","created":"2023-11-14T13:49:22.294804072Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"fd1dd8ff","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy"
:"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"fd1dd8ff\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"c3514bf1a6e6b88e0d142b54a893762bacd6330d9afa8404a5bf8e09137177a0","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-11-14T13:49:22.224199527Z","io.kubernetes.cri-o.Image":"2694cf044d66591c37b12c60ce1f1cdba3d271af5ebda43a2e4d32ebbadd97d0","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/kube-apiserver:v1.18.20","io.kubernetes.cri-o.ImageRef":"2694cf044d66591c37b12c60ce1f1cdba3d271af5ebda43a2e4d32ebbadd97d0","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-ingress-addon-legacy-814110\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.
pod.uid\":\"78b40af95c64e5112ac985f00b18628c\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-ingress-addon-legacy-814110_78b40af95c64e5112ac985f00b18628c/kube-apiserver/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/d98e398b2ff8213945fa24a7058d112d08de79d2611f364cda062ce90cd4a2bb/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-ingress-addon-legacy-814110_kube-system_78b40af95c64e5112ac985f00b18628c_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/6e1fc3b419df23d749e1b891e98ff510cf92b88d45e36557d2e4b5381a6ba735/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"6e1fc3b419df23d749e1b891e98ff510cf92b88d45e36557d2e4b5381a6ba735","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-ingress-addon-legacy-814110_kube-system_78b40af95c64e5112ac985f00b18628c_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io
.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/78b40af95c64e5112ac985f00b18628c/containers/kube-apiserver/d4774868\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/78b40af95c64e5112ac985f00b18628c/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube
/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-apiserver-ingress-addon-legacy-814110","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"78b40af95c64e5112ac985f00b18628c","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8443","kubernetes.io/config.hash":"78b40af95c64e5112ac985f00b18628c","kubernetes.io/config.seen":"2023-11-14T13:49:17.916932516Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e2dfea6db0038b864200055ec6a5d37fcf9105316391feb798764e2953c92119","pid":2293,"status":"running","bundle":"/run/containers/storage/overlay-containers/e2dfea6db0038b864200055ec6a5d37fcf9105316391feb798764e2953c92119/userdata","rootfs":"/var/lib/containers/storage/
overlay/011e63d4039bc4594a851e1fdda32b81559a356e433fe7cd90de7256445a118c/merged","created":"2023-11-14T13:50:01.553499617Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"691bc4","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"691bc4\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\
",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"e2dfea6db0038b864200055ec6a5d37fcf9105316391feb798764e2953c92119","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-11-14T13:50:01.515945098Z","io.kubernetes.cri-o.IP.0":"10.244.0.2","io.kubernetes.cri-o.Image":"6e17ba78cf3ebe1410fe828dc4ca57d3df37ad0b3c1a64161e5c27d57a24d184","io.kubernetes.cri-o.ImageName":"k8s.gcr.io/coredns:1.6.7","io.kubernetes.cri-o.ImageRef":"6e17ba78cf3ebe1410fe828dc4ca57d3df37ad0b3c1a64161e5c27d57a24d184","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-66bff467f8-8k4sx\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"9f56ab7a-445a-4d18-9860-986f9b7ddbb0\"}","io.kubernetes.cri-o.LogPath":
"/var/log/pods/kube-system_coredns-66bff467f8-8k4sx_9f56ab7a-445a-4d18-9860-986f9b7ddbb0/coredns/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/011e63d4039bc4594a851e1fdda32b81559a356e433fe7cd90de7256445a118c/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-66bff467f8-8k4sx_kube-system_9f56ab7a-445a-4d18-9860-986f9b7ddbb0_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/16e72ca153e72807f41bd96bdb935945372e6e066b9a0fd8c7cb8ddfb315fa2f/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"16e72ca153e72807f41bd96bdb935945372e6e066b9a0fd8c7cb8ddfb315fa2f","io.kubernetes.cri-o.SandboxName":"k8s_coredns-66bff467f8-8k4sx_kube-system_9f56ab7a-445a-4d18-9860-986f9b7ddbb0_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"
host_path\":\"/var/lib/kubelet/pods/9f56ab7a-445a-4d18-9860-986f9b7ddbb0/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/9f56ab7a-445a-4d18-9860-986f9b7ddbb0/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/9f56ab7a-445a-4d18-9860-986f9b7ddbb0/containers/coredns/e4ce512f\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/9f56ab7a-445a-4d18-9860-986f9b7ddbb0/volumes/kubernetes.io~secret/coredns-token-hcjp6\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"coredns-66bff467f8-8k4sx","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"9f56ab7a-4
45a-4d18-9860-986f9b7ddbb0","kubernetes.io/config.seen":"2023-11-14T13:50:01.139577116Z","kubernetes.io/config.source":"api"},"owner":"root"}]
	I1114 13:50:05.623863 1223285 cri.go:126] list returned 8 containers
	I1114 13:50:05.623903 1223285 cri.go:129] container: {ID:1233093423d4dd2cd51a48a275bd031397a8c8cc3f80caad8be47bdf0ce8d792 Status:running}
	I1114 13:50:05.623932 1223285 cri.go:135] skipping {1233093423d4dd2cd51a48a275bd031397a8c8cc3f80caad8be47bdf0ce8d792 running}: state = "running", want "paused"
	I1114 13:50:05.623957 1223285 cri.go:129] container: {ID:1e9198b4f97a6f3d51b839bca467dcfafccfb90dc04c16d668c85ab153a5b7fd Status:running}
	I1114 13:50:05.623990 1223285 cri.go:135] skipping {1e9198b4f97a6f3d51b839bca467dcfafccfb90dc04c16d668c85ab153a5b7fd running}: state = "running", want "paused"
	I1114 13:50:05.624018 1223285 cri.go:129] container: {ID:2cf9960ed4483ad88f1ec2b9a17f53e101cd65dcbc0dc50339d31b26821cb572 Status:running}
	I1114 13:50:05.624042 1223285 cri.go:135] skipping {2cf9960ed4483ad88f1ec2b9a17f53e101cd65dcbc0dc50339d31b26821cb572 running}: state = "running", want "paused"
	I1114 13:50:05.624064 1223285 cri.go:129] container: {ID:3ff47c9dd0749b62e315cd73745025e93b632a3b2359ad311a1319f9c6db623c Status:running}
	I1114 13:50:05.624097 1223285 cri.go:135] skipping {3ff47c9dd0749b62e315cd73745025e93b632a3b2359ad311a1319f9c6db623c running}: state = "running", want "paused"
	I1114 13:50:05.624123 1223285 cri.go:129] container: {ID:4e5d19b2f0e8227e0d1d4a26093125a427c7b7538a1409bf2f30f3c0ad038fba Status:running}
	I1114 13:50:05.624146 1223285 cri.go:135] skipping {4e5d19b2f0e8227e0d1d4a26093125a427c7b7538a1409bf2f30f3c0ad038fba running}: state = "running", want "paused"
	I1114 13:50:05.624167 1223285 cri.go:129] container: {ID:92ecb93026e45063bce6707ecf57a8e58efb9cfb6c1adbfefaa07a4540a4f13a Status:running}
	I1114 13:50:05.624199 1223285 cri.go:135] skipping {92ecb93026e45063bce6707ecf57a8e58efb9cfb6c1adbfefaa07a4540a4f13a running}: state = "running", want "paused"
	I1114 13:50:05.624221 1223285 cri.go:129] container: {ID:c3514bf1a6e6b88e0d142b54a893762bacd6330d9afa8404a5bf8e09137177a0 Status:running}
	I1114 13:50:05.624241 1223285 cri.go:135] skipping {c3514bf1a6e6b88e0d142b54a893762bacd6330d9afa8404a5bf8e09137177a0 running}: state = "running", want "paused"
	I1114 13:50:05.624262 1223285 cri.go:129] container: {ID:e2dfea6db0038b864200055ec6a5d37fcf9105316391feb798764e2953c92119 Status:running}
	I1114 13:50:05.624282 1223285 cri.go:135] skipping {e2dfea6db0038b864200055ec6a5d37fcf9105316391feb798764e2953c92119 running}: state = "running", want "paused"
	I1114 13:50:05.627229 1223285 out.go:177] * ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I1114 13:50:05.629242 1223285 config.go:182] Loaded profile config "ingress-addon-legacy-814110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1114 13:50:05.629274 1223285 addons.go:69] Setting ingress=true in profile "ingress-addon-legacy-814110"
	I1114 13:50:05.629283 1223285 addons.go:231] Setting addon ingress=true in "ingress-addon-legacy-814110"
	I1114 13:50:05.629338 1223285 host.go:66] Checking if "ingress-addon-legacy-814110" exists ...
	I1114 13:50:05.629784 1223285 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-814110 --format={{.State.Status}}
	I1114 13:50:05.649803 1223285 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v0.49.3
	I1114 13:50:05.651747 1223285 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I1114 13:50:05.653819 1223285 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I1114 13:50:05.655831 1223285 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1114 13:50:05.655852 1223285 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (15618 bytes)
	I1114 13:50:05.655922 1223285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-814110
	I1114 13:50:05.673523 1223285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34294 SSHKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/ingress-addon-legacy-814110/id_rsa Username:docker}
	I1114 13:50:05.786855 1223285 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1114 13:50:06.339543 1223285 addons.go:467] Verifying addon ingress=true in "ingress-addon-legacy-814110"
	I1114 13:50:06.341936 1223285 out.go:177] * Verifying ingress addon...
	I1114 13:50:06.345299 1223285 kapi.go:59] client config for ingress-addon-legacy-814110: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/client.crt", KeyFile:"/home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/client.key", CAFile:"/home/jenkins/minikube-integration/17581-1186318/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[
]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c4650), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1114 13:50:06.346117 1223285 cert_rotation.go:137] Starting client certificate rotation controller
	I1114 13:50:06.346561 1223285 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1114 13:50:06.369818 1223285 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1114 13:50:06.369854 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:50:06.380129 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:50:06.885094 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:50:07.385176 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:50:07.885140 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:50:08.384686 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:50:08.884585 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:50:09.384085 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:50:09.884104 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:50:10.384459 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:50:10.886534 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:50:11.385298 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:50:11.885014 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:50:12.384326 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:50:12.884776 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:50:13.385063 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:50:13.884178 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:50:14.384524 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:50:14.884968 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:50:15.384384 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:50:15.885003 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:50:16.385224 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:50:16.884351 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:50:17.385023 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:50:17.884074 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:50:18.384299 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:50:18.884526 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:50:19.385521 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:50:19.884206 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:50:20.384878 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:50:20.884670 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:50:21.384106 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:50:21.884369 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:50:22.386337 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:50:22.884656 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:50:23.384126 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:50:23.884325 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:50:24.384872 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:50:24.884168 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:50:25.384795 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:50:25.884239 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:50:26.386364 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:50:26.884731 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:50:27.384081 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:50:27.884202 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:50:28.384722 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:50:28.883916 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:50:29.384188 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:50:29.884661 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:50:30.384120 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:50:30.884630 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:50:31.384339 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:50:31.884428 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:50:32.385012 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:50:32.885297 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:50:33.384891 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:50:33.884215 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:50:34.384704 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:50:34.884854 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:50:35.384713 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:50:35.883966 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:50:36.384000 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:50:36.884666 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:50:37.383923 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:50:37.884513 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:50:38.385068 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	[... identical "waiting for pod" poll entries repeated at ~0.5s intervals from 13:50:38 through 13:54:43 (pid 1223285, kapi.go:96); the ingress-nginx pod remained Pending for the entire interval ...]
	I1114 13:54:43.384209 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:54:43.884699 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:54:44.384083 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:54:44.884461 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:54:45.384191 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:54:45.884666 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:54:46.386244 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:54:46.884790 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:54:47.384674 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:54:47.884029 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:54:48.384312 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:54:48.884828 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:54:49.384024 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:54:49.884011 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:54:50.384240 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:54:50.884631 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:54:51.384788 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:54:51.885263 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:54:52.384937 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:54:52.884614 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:54:53.385093 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:54:53.884166 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:54:54.384695 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:54:54.884980 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:54:55.384724 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:54:55.884051 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:54:56.384702 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:54:56.884034 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:54:57.384402 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:54:57.886701 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:54:58.384179 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:54:58.884525 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:54:59.385358 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:54:59.884730 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:00.384424 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:00.884813 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:01.385164 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:01.884916 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:02.384268 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:02.884642 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:03.384160 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:03.884082 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:04.384133 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:04.884898 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:05.385494 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:05.885058 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:06.384690 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:06.884349 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:07.384759 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:07.884662 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:08.385604 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:08.884154 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:09.384840 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:09.884323 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:10.384905 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:10.884296 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:11.384502 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:11.884961 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:12.384222 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:12.886533 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:13.384692 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:13.884167 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:14.384299 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:14.884387 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:15.384713 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:15.884219 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:16.384721 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:16.884666 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:17.384045 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:17.884679 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:18.384037 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:18.884048 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:19.384393 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:19.884816 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:20.384347 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:20.884506 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:21.384708 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:21.883879 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:22.384064 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:22.884234 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:23.384639 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:23.884213 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:24.384490 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:24.884705 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:25.384082 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:25.884381 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:26.384923 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:26.884278 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:27.384756 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:27.884170 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:28.384760 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:28.884157 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:29.384515 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:29.884848 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:30.384104 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:30.884291 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:31.384814 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:31.884122 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:32.384472 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:32.884452 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:33.386621 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:33.885235 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:34.384844 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:34.884153 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:35.384702 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:35.884211 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:36.384660 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:36.883908 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:37.384207 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:37.884397 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:38.384676 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:38.884756 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:39.384878 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:39.884310 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:40.384965 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:40.884410 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:41.385005 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:41.884343 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:42.385138 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:42.884490 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:43.385057 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:43.884971 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:44.384282 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:44.884509 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:45.384296 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:45.884856 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:46.384321 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:46.884574 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:47.385079 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:47.884762 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:48.383959 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:48.884856 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:49.383953 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:49.884306 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:50.384928 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:50.883967 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:51.384577 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:51.886618 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:52.383833 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:52.884397 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:53.385014 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:53.883882 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:54.384013 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:54.884505 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:55.384171 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:55.884583 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:56.384825 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:56.883963 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:57.383931 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:57.884018 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:58.383932 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:58.884073 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:59.384026 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:55:59.884183 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:56:00.384958 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:56:00.884517 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:56:01.385117 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:56:01.883875 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:56:02.383998 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:56:02.884210 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:56:03.384740 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:56:03.884687 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:56:04.385054 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:56:04.884037 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:56:05.384696 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:56:05.884689 1223285 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1114 13:56:06.347207 1223285 kapi.go:107] duration metric: took 6m0.000637905s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1114 13:56:06.349418 1223285 out.go:177] 
	W1114 13:56:06.351307 1223285 out.go:239] X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [waiting for app.kubernetes.io/name=ingress-nginx pods: context deadline exceeded]
	W1114 13:56:06.351324 1223285 out.go:239] * 
	W1114 13:56:06.358172 1223285 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1114 13:56:06.360301 1223285 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:71: failed to enable ingress addon: exit status 10
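The kapi.go:96 lines in the stderr above are a plain poll-until-deadline loop: minikube lists pods matching the label selector roughly every 500ms until a six-minute context deadline expires, and the addon enable then fails with "context deadline exceeded". A minimal sketch of that pattern, assuming client-go/apimachinery v0.27+ (for wait.PollUntilContextCancel), a kubeconfig at the default location, and the ingress-nginx namespace; this is an illustration of the pattern, not minikube's actual kapi.go implementation:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Six-minute budget, mirroring the "took 6m0.000637905s" duration metric above.
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()

	selector := "app.kubernetes.io/name=ingress-nginx"
	err = wait.PollUntilContextCancel(ctx, 500*time.Millisecond, true,
		func(ctx context.Context) (bool, error) {
			pods, lerr := client.CoreV1().Pods("ingress-nginx").List(ctx,
				metav1.ListOptions{LabelSelector: selector})
			if lerr != nil {
				return false, nil // treat API errors as transient; keep polling
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return true, nil // a matching pod came up before the deadline
				}
			}
			fmt.Printf("waiting for pod %q, current state: Pending\n", selector)
			return false, nil
		})
	if err != nil {
		// On timeout this surfaces as "context deadline exceeded",
		// the same error wrapped into MK_ADDON_ENABLE above.
		fmt.Println("waiting for pods:", err)
	}
}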
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-814110
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-814110:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9a9e7aa0fbf6df369770f5cba56640f8d24be3ea2a261d6f6825a0ff065ba7a4",
	        "Created": "2023-11-14T13:48:57.286856822Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1220737,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-11-14T13:48:57.611502817Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:977f9df3a3e2dccc16de7b5e8115e5e1294a98b99d56135cce7538135e7a7a9d",
	        "ResolvConfPath": "/var/lib/docker/containers/9a9e7aa0fbf6df369770f5cba56640f8d24be3ea2a261d6f6825a0ff065ba7a4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9a9e7aa0fbf6df369770f5cba56640f8d24be3ea2a261d6f6825a0ff065ba7a4/hostname",
	        "HostsPath": "/var/lib/docker/containers/9a9e7aa0fbf6df369770f5cba56640f8d24be3ea2a261d6f6825a0ff065ba7a4/hosts",
	        "LogPath": "/var/lib/docker/containers/9a9e7aa0fbf6df369770f5cba56640f8d24be3ea2a261d6f6825a0ff065ba7a4/9a9e7aa0fbf6df369770f5cba56640f8d24be3ea2a261d6f6825a0ff065ba7a4-json.log",
	        "Name": "/ingress-addon-legacy-814110",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-814110:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-814110",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/0b5abde21255f48f67b79dc09304078df88614923d36dcb4ae5f6f95af6a1bf3-init/diff:/var/lib/docker/overlay2/ad9b1528ccc99a2a23c8205d781cfd6ce01aa0662a87aad99178910b13bfc77f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0b5abde21255f48f67b79dc09304078df88614923d36dcb4ae5f6f95af6a1bf3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0b5abde21255f48f67b79dc09304078df88614923d36dcb4ae5f6f95af6a1bf3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0b5abde21255f48f67b79dc09304078df88614923d36dcb4ae5f6f95af6a1bf3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-814110",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-814110/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-814110",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-814110",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-814110",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e57fb3d18afa63c048b8eaa5b7da18b8c5559c45dfce92857f22cf94e60de464",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34294"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34293"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34290"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34292"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34291"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/e57fb3d18afa",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-814110": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "9a9e7aa0fbf6",
	                        "ingress-addon-legacy-814110"
	                    ],
	                    "NetworkID": "0cb255c5f52bb06a15e0868a02b7512aefce6a3c8849e0c9aedb587574294a74",
	                    "EndpointID": "9509e8fb1672e2192e1868fb502423b5c96afc625e0fdb408f57c8750c3abe55",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
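Most of the inspect dump above is noise for this failure; the fields that matter are the container state, its address on the cluster network, and the host port mapped to 22/tcp. A hedged sketch of pulling just those fields via docker inspect --format from Go (the container/network name ingress-addon-legacy-814110 is taken from the dump; the program itself is hypothetical):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// docker inspect evaluates --format as a Go template against the same
	// JSON document shown above; index reaches into the Networks/Ports maps.
	format := `{{.State.Status}} ` +
		`{{(index .NetworkSettings.Networks "ingress-addon-legacy-814110").IPAddress}} ` +
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "inspect", "--format", format,
		"ingress-addon-legacy-814110").Output()
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out)) // e.g. "running 192.168.49.2 34294"
}

Run against the container above, this would print "running 192.168.49.2 34294", consistent with the State, Networks, and Ports blocks in the dump.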
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ingress-addon-legacy-814110 -n ingress-addon-legacy-814110
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddonActivation FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddonActivation]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-814110 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-814110 logs -n 25: (1.462748513s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                  Args                                  |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| image          | functional-943397 image rm                                             | functional-943397           | jenkins | v1.32.0 | 14 Nov 23 13:47 UTC | 14 Nov 23 13:47 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-943397               |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-943397 image ls                                             | functional-943397           | jenkins | v1.32.0 | 14 Nov 23 13:47 UTC | 14 Nov 23 13:47 UTC |
	| image          | functional-943397 image load                                           | functional-943397           | jenkins | v1.32.0 | 14 Nov 23 13:47 UTC | 14 Nov 23 13:47 UTC |
	|                | /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-943397 image ls                                             | functional-943397           | jenkins | v1.32.0 | 14 Nov 23 13:47 UTC | 14 Nov 23 13:47 UTC |
	| image          | functional-943397 image save --daemon                                  | functional-943397           | jenkins | v1.32.0 | 14 Nov 23 13:47 UTC | 14 Nov 23 13:48 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-943397               |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| ssh            | functional-943397 ssh sudo cat                                         | functional-943397           | jenkins | v1.32.0 | 14 Nov 23 13:48 UTC | 14 Nov 23 13:48 UTC |
	|                | /etc/test/nested/copy/1191690/hosts                                    |                             |         |         |                     |                     |
	| ssh            | functional-943397 ssh sudo cat                                         | functional-943397           | jenkins | v1.32.0 | 14 Nov 23 13:48 UTC | 14 Nov 23 13:48 UTC |
	|                | /etc/ssl/certs/1191690.pem                                             |                             |         |         |                     |                     |
	| ssh            | functional-943397 ssh sudo cat                                         | functional-943397           | jenkins | v1.32.0 | 14 Nov 23 13:48 UTC | 14 Nov 23 13:48 UTC |
	|                | /usr/share/ca-certificates/1191690.pem                                 |                             |         |         |                     |                     |
	| ssh            | functional-943397 ssh sudo cat                                         | functional-943397           | jenkins | v1.32.0 | 14 Nov 23 13:48 UTC | 14 Nov 23 13:48 UTC |
	|                | /etc/ssl/certs/51391683.0                                              |                             |         |         |                     |                     |
	| ssh            | functional-943397 ssh sudo cat                                         | functional-943397           | jenkins | v1.32.0 | 14 Nov 23 13:48 UTC | 14 Nov 23 13:48 UTC |
	|                | /etc/ssl/certs/11916902.pem                                            |                             |         |         |                     |                     |
	| ssh            | functional-943397 ssh sudo cat                                         | functional-943397           | jenkins | v1.32.0 | 14 Nov 23 13:48 UTC | 14 Nov 23 13:48 UTC |
	|                | /usr/share/ca-certificates/11916902.pem                                |                             |         |         |                     |                     |
	| ssh            | functional-943397 ssh sudo cat                                         | functional-943397           | jenkins | v1.32.0 | 14 Nov 23 13:48 UTC | 14 Nov 23 13:48 UTC |
	|                | /etc/ssl/certs/3ec20f2e.0                                              |                             |         |         |                     |                     |
	| image          | functional-943397                                                      | functional-943397           | jenkins | v1.32.0 | 14 Nov 23 13:48 UTC | 14 Nov 23 13:48 UTC |
	|                | image ls --format short                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-943397                                                      | functional-943397           | jenkins | v1.32.0 | 14 Nov 23 13:48 UTC | 14 Nov 23 13:48 UTC |
	|                | image ls --format yaml                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| ssh            | functional-943397 ssh pgrep                                            | functional-943397           | jenkins | v1.32.0 | 14 Nov 23 13:48 UTC |                     |
	|                | buildkitd                                                              |                             |         |         |                     |                     |
	| image          | functional-943397 image build -t                                       | functional-943397           | jenkins | v1.32.0 | 14 Nov 23 13:48 UTC | 14 Nov 23 13:48 UTC |
	|                | localhost/my-image:functional-943397                                   |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                       |                             |         |         |                     |                     |
	| image          | functional-943397 image ls                                             | functional-943397           | jenkins | v1.32.0 | 14 Nov 23 13:48 UTC | 14 Nov 23 13:48 UTC |
	| image          | functional-943397                                                      | functional-943397           | jenkins | v1.32.0 | 14 Nov 23 13:48 UTC | 14 Nov 23 13:48 UTC |
	|                | image ls --format json                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-943397                                                      | functional-943397           | jenkins | v1.32.0 | 14 Nov 23 13:48 UTC | 14 Nov 23 13:48 UTC |
	|                | image ls --format table                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| update-context | functional-943397                                                      | functional-943397           | jenkins | v1.32.0 | 14 Nov 23 13:48 UTC | 14 Nov 23 13:48 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| update-context | functional-943397                                                      | functional-943397           | jenkins | v1.32.0 | 14 Nov 23 13:48 UTC | 14 Nov 23 13:48 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| update-context | functional-943397                                                      | functional-943397           | jenkins | v1.32.0 | 14 Nov 23 13:48 UTC | 14 Nov 23 13:48 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| delete         | -p functional-943397                                                   | functional-943397           | jenkins | v1.32.0 | 14 Nov 23 13:48 UTC | 14 Nov 23 13:48 UTC |
	| start          | -p ingress-addon-legacy-814110                                         | ingress-addon-legacy-814110 | jenkins | v1.32.0 | 14 Nov 23 13:48 UTC | 14 Nov 23 13:50 UTC |
	|                | --kubernetes-version=v1.18.20                                          |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	|                | -v=5 --driver=docker                                                   |                             |         |         |                     |                     |
	|                | --container-runtime=crio                                               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-814110                                            | ingress-addon-legacy-814110 | jenkins | v1.32.0 | 14 Nov 23 13:50 UTC |                     |
	|                | addons enable ingress                                                  |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/14 13:48:33
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1114 13:48:33.163611 1220278 out.go:296] Setting OutFile to fd 1 ...
	I1114 13:48:33.163759 1220278 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 13:48:33.163769 1220278 out.go:309] Setting ErrFile to fd 2...
	I1114 13:48:33.163775 1220278 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 13:48:33.164044 1220278 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17581-1186318/.minikube/bin
	I1114 13:48:33.164462 1220278 out.go:303] Setting JSON to false
	I1114 13:48:33.165459 1220278 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":37860,"bootTime":1699931854,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1114 13:48:33.165538 1220278 start.go:138] virtualization:  
	I1114 13:48:33.167856 1220278 out.go:177] * [ingress-addon-legacy-814110] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1114 13:48:33.170172 1220278 out.go:177]   - MINIKUBE_LOCATION=17581
	I1114 13:48:33.172028 1220278 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1114 13:48:33.170351 1220278 notify.go:220] Checking for updates...
	I1114 13:48:33.173835 1220278 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17581-1186318/kubeconfig
	I1114 13:48:33.175934 1220278 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17581-1186318/.minikube
	I1114 13:48:33.177801 1220278 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1114 13:48:33.179955 1220278 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1114 13:48:33.182416 1220278 driver.go:378] Setting default libvirt URI to qemu:///system
	I1114 13:48:33.206511 1220278 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1114 13:48:33.206618 1220278 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1114 13:48:33.288027 1220278 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:35 SystemTime:2023-11-14 13:48:33.277409035 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1114 13:48:33.288152 1220278 docker.go:295] overlay module found
	I1114 13:48:33.290460 1220278 out.go:177] * Using the docker driver based on user configuration
	I1114 13:48:33.292469 1220278 start.go:298] selected driver: docker
	I1114 13:48:33.292486 1220278 start.go:902] validating driver "docker" against <nil>
	I1114 13:48:33.292507 1220278 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1114 13:48:33.293358 1220278 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1114 13:48:33.360617 1220278 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:35 SystemTime:2023-11-14 13:48:33.351179765 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1114 13:48:33.360763 1220278 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1114 13:48:33.360990 1220278 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1114 13:48:33.363011 1220278 out.go:177] * Using Docker driver with root privileges
	I1114 13:48:33.364965 1220278 cni.go:84] Creating CNI manager for ""
	I1114 13:48:33.364986 1220278 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1114 13:48:33.364997 1220278 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1114 13:48:33.365014 1220278 start_flags.go:323] config:
	{Name:ingress-addon-legacy-814110 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-814110 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 13:48:33.367454 1220278 out.go:177] * Starting control plane node ingress-addon-legacy-814110 in cluster ingress-addon-legacy-814110
	I1114 13:48:33.369323 1220278 cache.go:121] Beginning downloading kic base image for docker with crio
	I1114 13:48:33.371429 1220278 out.go:177] * Pulling base image ...
	I1114 13:48:33.373447 1220278 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1114 13:48:33.373539 1220278 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 in local docker daemon
	I1114 13:48:33.390734 1220278 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 in local docker daemon, skipping pull
	I1114 13:48:33.390763 1220278 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 exists in daemon, skipping load
	I1114 13:48:33.448949 1220278 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I1114 13:48:33.448971 1220278 cache.go:56] Caching tarball of preloaded images
	I1114 13:48:33.449158 1220278 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1114 13:48:33.451487 1220278 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1114 13:48:33.453873 1220278 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I1114 13:48:33.577611 1220278 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4?checksum=md5:8ddd7f37d9a9977fe856222993d36c3d -> /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I1114 13:48:49.379495 1220278 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I1114 13:48:49.379606 1220278 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I1114 13:48:50.571379 1220278 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on crio
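
The preload download above appends a ?checksum=md5:... query so the tarball can be verified once fetched. A minimal Go sketch of that style of md5 verification (the local path below is a placeholder, and the digest is simply the one advertised in the URL above):

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"os"
	)

	// verifyMD5 streams the file through an md5 hash and compares the
	// hex digest against the checksum advertised in the download URL.
	func verifyMD5(path, want string) error {
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()
		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			return err
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != want {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
		}
		return nil
	}

	func main() {
		// Hypothetical local path, for illustration only.
		if err := verifyMD5("preloaded-images.tar.lz4", "8ddd7f37d9a9977fe856222993d36c3d"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
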
	I1114 13:48:50.571766 1220278 profile.go:148] Saving config to /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/config.json ...
	I1114 13:48:50.571803 1220278 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/config.json: {Name:mkf4549114595f007cb9c1dcef1d85ffa7059f52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 13:48:50.571992 1220278 cache.go:194] Successfully downloaded all kic artifacts
	I1114 13:48:50.572018 1220278 start.go:365] acquiring machines lock for ingress-addon-legacy-814110: {Name:mkca719b9d5f584c15d6e33b8df461d5b71aacdb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 13:48:50.572084 1220278 start.go:369] acquired machines lock for "ingress-addon-legacy-814110" in 53.152µs
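
The machines lock above is acquired with a 500ms retry delay and a 10m timeout. A rough sketch of that acquire-with-retry pattern using an exclusive lock file (the path and intervals are illustrative, not minikube's actual lock implementation):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// acquire polls for an exclusive lock file until the timeout expires,
	// sleeping `delay` between attempts; O_EXCL makes creation atomic.
	func acquire(path string, delay, timeout time.Duration) (func(), error) {
		deadline := time.Now().Add(timeout)
		for {
			f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
			if err == nil {
				f.Close()
				return func() { os.Remove(path) }, nil
			}
			if time.Now().After(deadline) {
				return nil, fmt.Errorf("timed out acquiring %s", path)
			}
			time.Sleep(delay)
		}
	}

	func main() {
		release, err := acquire("/tmp/machines.lock", 500*time.Millisecond, 10*time.Minute)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer release()
		fmt.Println("lock held; machine creation would happen here")
	}
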
	I1114 13:48:50.572105 1220278 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-814110 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-814110 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1114 13:48:50.572181 1220278 start.go:125] createHost starting for "" (driver="docker")
	I1114 13:48:50.574720 1220278 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1114 13:48:50.574952 1220278 start.go:159] libmachine.API.Create for "ingress-addon-legacy-814110" (driver="docker")
	I1114 13:48:50.574996 1220278 client.go:168] LocalClient.Create starting
	I1114 13:48:50.575067 1220278 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/ca.pem
	I1114 13:48:50.575102 1220278 main.go:141] libmachine: Decoding PEM data...
	I1114 13:48:50.575122 1220278 main.go:141] libmachine: Parsing certificate...
	I1114 13:48:50.575184 1220278 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/cert.pem
	I1114 13:48:50.575207 1220278 main.go:141] libmachine: Decoding PEM data...
	I1114 13:48:50.575222 1220278 main.go:141] libmachine: Parsing certificate...
	I1114 13:48:50.575589 1220278 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-814110 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1114 13:48:50.593896 1220278 cli_runner.go:211] docker network inspect ingress-addon-legacy-814110 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1114 13:48:50.593986 1220278 network_create.go:281] running [docker network inspect ingress-addon-legacy-814110] to gather additional debugging logs...
	I1114 13:48:50.594007 1220278 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-814110
	W1114 13:48:50.611187 1220278 cli_runner.go:211] docker network inspect ingress-addon-legacy-814110 returned with exit code 1
	I1114 13:48:50.611221 1220278 network_create.go:284] error running [docker network inspect ingress-addon-legacy-814110]: docker network inspect ingress-addon-legacy-814110: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-814110 not found
	I1114 13:48:50.611237 1220278 network_create.go:286] output of [docker network inspect ingress-addon-legacy-814110]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-814110 not found
	
	** /stderr **
	I1114 13:48:50.611339 1220278 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1114 13:48:50.630255 1220278 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400044b930}
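
Choosing 192.168.49.0/24 above implies walking candidate private /24 subnets and skipping any that overlap networks already present on the host. A simplified sketch of that overlap check with Go's net package (the candidate range and the "taken" list are made up for illustration):

	package main

	import (
		"fmt"
		"net"
	)

	// overlaps reports whether two CIDRs share any addresses; since both
	// IPs are network addresses, mutual containment is sufficient.
	func overlaps(a, b *net.IPNet) bool {
		return a.Contains(b.IP) || b.Contains(a.IP)
	}

	func main() {
		// Hypothetical networks already in use on the host.
		taken := []string{"172.17.0.0/16", "192.168.58.0/24"}

		// Walk candidate private /24s until one is free.
		for third := 49; third <= 59; third++ {
			cand := fmt.Sprintf("192.168.%d.0/24", third)
			_, cnet, _ := net.ParseCIDR(cand)
			free := true
			for _, t := range taken {
				_, tnet, _ := net.ParseCIDR(t)
				if overlaps(cnet, tnet) {
					free = false
					break
				}
			}
			if free {
				fmt.Println("using free private subnet", cand)
				return
			}
		}
		fmt.Println("no free subnet found")
	}
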
	I1114 13:48:50.630298 1220278 network_create.go:124] attempt to create docker network ingress-addon-legacy-814110 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1114 13:48:50.630358 1220278 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-814110 ingress-addon-legacy-814110
	I1114 13:48:50.709089 1220278 network_create.go:108] docker network ingress-addon-legacy-814110 192.168.49.0/24 created
	I1114 13:48:50.709123 1220278 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-814110" container
	I1114 13:48:50.709209 1220278 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1114 13:48:50.725637 1220278 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-814110 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-814110 --label created_by.minikube.sigs.k8s.io=true
	I1114 13:48:50.743823 1220278 oci.go:103] Successfully created a docker volume ingress-addon-legacy-814110
	I1114 13:48:50.743909 1220278 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-814110-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-814110 --entrypoint /usr/bin/test -v ingress-addon-legacy-814110:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 -d /var/lib
	I1114 13:48:52.293228 1220278 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-814110-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-814110 --entrypoint /usr/bin/test -v ingress-addon-legacy-814110:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 -d /var/lib: (1.549274169s)
	I1114 13:48:52.293263 1220278 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-814110
	I1114 13:48:52.293277 1220278 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1114 13:48:52.293298 1220278 kic.go:194] Starting extracting preloaded images to volume ...
	I1114 13:48:52.293397 1220278 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-814110:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 -I lz4 -xf /preloaded.tar -C /extractDir
	I1114 13:48:57.204649 1220278 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-814110:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 -I lz4 -xf /preloaded.tar -C /extractDir: (4.911209557s)
	I1114 13:48:57.204691 1220278 kic.go:203] duration metric: took 4.911390 seconds to extract preloaded images to volume
	W1114 13:48:57.204834 1220278 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1114 13:48:57.204968 1220278 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1114 13:48:57.270920 1220278 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-814110 --name ingress-addon-legacy-814110 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-814110 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-814110 --network ingress-addon-legacy-814110 --ip 192.168.49.2 --volume ingress-addon-legacy-814110:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24
	I1114 13:48:57.620677 1220278 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-814110 --format={{.State.Running}}
	I1114 13:48:57.642989 1220278 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-814110 --format={{.State.Status}}
	I1114 13:48:57.666636 1220278 cli_runner.go:164] Run: docker exec ingress-addon-legacy-814110 stat /var/lib/dpkg/alternatives/iptables
	I1114 13:48:57.732348 1220278 oci.go:144] the created container "ingress-addon-legacy-814110" has a running status.
	I1114 13:48:57.732375 1220278 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17581-1186318/.minikube/machines/ingress-addon-legacy-814110/id_rsa...
	I1114 13:48:58.653466 1220278 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1186318/.minikube/machines/ingress-addon-legacy-814110/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1114 13:48:58.653512 1220278 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17581-1186318/.minikube/machines/ingress-addon-legacy-814110/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1114 13:48:58.677504 1220278 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-814110 --format={{.State.Status}}
	I1114 13:48:58.701064 1220278 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1114 13:48:58.701087 1220278 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-814110 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1114 13:48:58.768205 1220278 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-814110 --format={{.State.Status}}
	I1114 13:48:58.787416 1220278 machine.go:88] provisioning docker machine ...
	I1114 13:48:58.787455 1220278 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-814110"
	I1114 13:48:58.787518 1220278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-814110
	I1114 13:48:58.815016 1220278 main.go:141] libmachine: Using SSH client type: native
	I1114 13:48:58.815578 1220278 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bded0] 0x3c0640 <nil>  [] 0s} 127.0.0.1 34294 <nil> <nil>}
	I1114 13:48:58.815598 1220278 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-814110 && echo "ingress-addon-legacy-814110" | sudo tee /etc/hostname
	I1114 13:48:58.980043 1220278 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-814110
	
	I1114 13:48:58.980193 1220278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-814110
	I1114 13:48:58.999841 1220278 main.go:141] libmachine: Using SSH client type: native
	I1114 13:48:59.000248 1220278 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bded0] 0x3c0640 <nil>  [] 0s} 127.0.0.1 34294 <nil> <nil>}
	I1114 13:48:59.000267 1220278 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-814110' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-814110/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-814110' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1114 13:48:59.141895 1220278 main.go:141] libmachine: SSH cmd err, output: <nil>: 
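
Each provisioning step above is one command executed over an SSH session to the container's published port 22. A bare-bones sketch of such a run-command-over-SSH helper using golang.org/x/crypto/ssh (the address, user, and key path are placeholders, and host-key checking is deliberately relaxed for the local-container case):

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	// runSSH executes one command on the remote host and returns its
	// combined stdout/stderr, mirroring the provisioning calls above.
	func runSSH(addr, user, keyPath, cmd string) (string, error) {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return "", err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return "", err
		}
		cfg := &ssh.ClientConfig{
			User: user,
			Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
			// Acceptable for a port published on 127.0.0.1; do not do
			// this against untrusted hosts.
			HostKeyCallback: ssh.InsecureIgnoreHostKey(),
		}
		client, err := ssh.Dial("tcp", addr, cfg)
		if err != nil {
			return "", err
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			return "", err
		}
		defer sess.Close()
		out, err := sess.CombinedOutput(cmd)
		return string(out), err
	}

	func main() {
		out, err := runSSH("127.0.0.1:34294", "docker", os.Getenv("HOME")+"/.ssh/id_rsa", "hostname")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Print(out)
	}
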
	I1114 13:48:59.141926 1220278 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17581-1186318/.minikube CaCertPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17581-1186318/.minikube}
	I1114 13:48:59.141945 1220278 ubuntu.go:177] setting up certificates
	I1114 13:48:59.141953 1220278 provision.go:83] configureAuth start
	I1114 13:48:59.142013 1220278 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-814110
	I1114 13:48:59.159522 1220278 provision.go:138] copyHostCerts
	I1114 13:48:59.159563 1220278 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17581-1186318/.minikube/ca.pem
	I1114 13:48:59.159593 1220278 exec_runner.go:144] found /home/jenkins/minikube-integration/17581-1186318/.minikube/ca.pem, removing ...
	I1114 13:48:59.159599 1220278 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17581-1186318/.minikube/ca.pem
	I1114 13:48:59.159679 1220278 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17581-1186318/.minikube/ca.pem (1082 bytes)
	I1114 13:48:59.159755 1220278 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17581-1186318/.minikube/cert.pem
	I1114 13:48:59.159772 1220278 exec_runner.go:144] found /home/jenkins/minikube-integration/17581-1186318/.minikube/cert.pem, removing ...
	I1114 13:48:59.159776 1220278 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17581-1186318/.minikube/cert.pem
	I1114 13:48:59.159800 1220278 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17581-1186318/.minikube/cert.pem (1123 bytes)
	I1114 13:48:59.159836 1220278 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17581-1186318/.minikube/key.pem
	I1114 13:48:59.159851 1220278 exec_runner.go:144] found /home/jenkins/minikube-integration/17581-1186318/.minikube/key.pem, removing ...
	I1114 13:48:59.159855 1220278 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17581-1186318/.minikube/key.pem
	I1114 13:48:59.159878 1220278 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17581-1186318/.minikube/key.pem (1675 bytes)
	I1114 13:48:59.159918 1220278 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17581-1186318/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17581-1186318/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17581-1186318/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-814110 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-814110]
	I1114 13:48:59.314222 1220278 provision.go:172] copyRemoteCerts
	I1114 13:48:59.314295 1220278 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1114 13:48:59.314341 1220278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-814110
	I1114 13:48:59.332186 1220278 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34294 SSHKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/ingress-addon-legacy-814110/id_rsa Username:docker}
	I1114 13:48:59.430993 1220278 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1186318/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1114 13:48:59.431113 1220278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1114 13:48:59.458858 1220278 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1114 13:48:59.458922 1220278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1114 13:48:59.486759 1220278 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1186318/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1114 13:48:59.486820 1220278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I1114 13:48:59.514330 1220278 provision.go:86] duration metric: configureAuth took 372.36419ms
	I1114 13:48:59.514356 1220278 ubuntu.go:193] setting minikube options for container-runtime
	I1114 13:48:59.514553 1220278 config.go:182] Loaded profile config "ingress-addon-legacy-814110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1114 13:48:59.514658 1220278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-814110
	I1114 13:48:59.532988 1220278 main.go:141] libmachine: Using SSH client type: native
	I1114 13:48:59.533423 1220278 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bded0] 0x3c0640 <nil>  [] 0s} 127.0.0.1 34294 <nil> <nil>}
	I1114 13:48:59.533459 1220278 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1114 13:48:59.812659 1220278 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1114 13:48:59.812721 1220278 machine.go:91] provisioned docker machine in 1.025281287s
	I1114 13:48:59.812738 1220278 client.go:171] LocalClient.Create took 9.237730526s
	I1114 13:48:59.812761 1220278 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-814110" took 9.237810526s
	I1114 13:48:59.812771 1220278 start.go:300] post-start starting for "ingress-addon-legacy-814110" (driver="docker")
	I1114 13:48:59.812793 1220278 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1114 13:48:59.812862 1220278 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1114 13:48:59.812935 1220278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-814110
	I1114 13:48:59.831211 1220278 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34294 SSHKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/ingress-addon-legacy-814110/id_rsa Username:docker}
	I1114 13:48:59.931393 1220278 ssh_runner.go:195] Run: cat /etc/os-release
	I1114 13:48:59.935474 1220278 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1114 13:48:59.935513 1220278 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1114 13:48:59.935525 1220278 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1114 13:48:59.935533 1220278 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1114 13:48:59.935544 1220278 filesync.go:126] Scanning /home/jenkins/minikube-integration/17581-1186318/.minikube/addons for local assets ...
	I1114 13:48:59.935601 1220278 filesync.go:126] Scanning /home/jenkins/minikube-integration/17581-1186318/.minikube/files for local assets ...
	I1114 13:48:59.935693 1220278 filesync.go:149] local asset: /home/jenkins/minikube-integration/17581-1186318/.minikube/files/etc/ssl/certs/11916902.pem -> 11916902.pem in /etc/ssl/certs
	I1114 13:48:59.935705 1220278 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1186318/.minikube/files/etc/ssl/certs/11916902.pem -> /etc/ssl/certs/11916902.pem
	I1114 13:48:59.935815 1220278 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1114 13:48:59.945912 1220278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/files/etc/ssl/certs/11916902.pem --> /etc/ssl/certs/11916902.pem (1708 bytes)
	I1114 13:48:59.974335 1220278 start.go:303] post-start completed in 161.549177ms
	I1114 13:48:59.974693 1220278 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-814110
	I1114 13:48:59.994163 1220278 profile.go:148] Saving config to /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/config.json ...
	I1114 13:48:59.994429 1220278 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1114 13:48:59.994469 1220278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-814110
	I1114 13:49:00.017149 1220278 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34294 SSHKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/ingress-addon-legacy-814110/id_rsa Username:docker}
	I1114 13:49:00.159810 1220278 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1114 13:49:00.166368 1220278 start.go:128] duration metric: createHost completed in 9.594170348s
	I1114 13:49:00.166398 1220278 start.go:83] releasing machines lock for "ingress-addon-legacy-814110", held for 9.594302294s
	I1114 13:49:00.166487 1220278 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-814110
	I1114 13:49:00.186508 1220278 ssh_runner.go:195] Run: cat /version.json
	I1114 13:49:00.186559 1220278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-814110
	I1114 13:49:00.187030 1220278 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1114 13:49:00.187113 1220278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-814110
	I1114 13:49:00.208217 1220278 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34294 SSHKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/ingress-addon-legacy-814110/id_rsa Username:docker}
	I1114 13:49:00.209930 1220278 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34294 SSHKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/ingress-addon-legacy-814110/id_rsa Username:docker}
	I1114 13:49:00.447439 1220278 ssh_runner.go:195] Run: systemctl --version
	I1114 13:49:00.453064 1220278 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1114 13:49:00.598986 1220278 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1114 13:49:00.604643 1220278 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1114 13:49:00.628139 1220278 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1114 13:49:00.628233 1220278 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1114 13:49:00.668346 1220278 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1114 13:49:00.668369 1220278 start.go:472] detecting cgroup driver to use...
	I1114 13:49:00.668401 1220278 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1114 13:49:00.668454 1220278 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1114 13:49:00.687681 1220278 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1114 13:49:00.701225 1220278 docker.go:203] disabling cri-docker service (if available) ...
	I1114 13:49:00.701291 1220278 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1114 13:49:00.718209 1220278 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1114 13:49:00.735179 1220278 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1114 13:49:00.831178 1220278 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1114 13:49:00.940177 1220278 docker.go:219] disabling docker service ...
	I1114 13:49:00.940248 1220278 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1114 13:49:00.962065 1220278 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1114 13:49:00.976509 1220278 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1114 13:49:01.079979 1220278 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1114 13:49:01.179055 1220278 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1114 13:49:01.193605 1220278 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1114 13:49:01.215098 1220278 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1114 13:49:01.215173 1220278 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 13:49:01.227266 1220278 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1114 13:49:01.227343 1220278 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 13:49:01.239746 1220278 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 13:49:01.251847 1220278 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
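
For reference, the four sed edits above leave the cri-o drop-in with settings along these lines (an illustrative fragment; the section headers follow the usual crio.conf layout and this is not a dump of the file from this run):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.2"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
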
	I1114 13:49:01.263913 1220278 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1114 13:49:01.276506 1220278 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1114 13:49:01.287691 1220278 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1114 13:49:01.297957 1220278 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1114 13:49:01.388403 1220278 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1114 13:49:01.508184 1220278 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1114 13:49:01.508268 1220278 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1114 13:49:01.513028 1220278 start.go:540] Will wait 60s for crictl version
	I1114 13:49:01.513092 1220278 ssh_runner.go:195] Run: which crictl
	I1114 13:49:01.517480 1220278 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1114 13:49:01.557641 1220278 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1114 13:49:01.557729 1220278 ssh_runner.go:195] Run: crio --version
	I1114 13:49:01.603554 1220278 ssh_runner.go:195] Run: crio --version
	I1114 13:49:01.649431 1220278 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.6 ...
	I1114 13:49:01.651195 1220278 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-814110 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1114 13:49:01.669189 1220278 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1114 13:49:01.674102 1220278 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1114 13:49:01.687448 1220278 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1114 13:49:01.687520 1220278 ssh_runner.go:195] Run: sudo crictl images --output json
	I1114 13:49:01.741636 1220278 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1114 13:49:01.741712 1220278 ssh_runner.go:195] Run: which lz4
	I1114 13:49:01.746209 1220278 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 -> /preloaded.tar.lz4
	I1114 13:49:01.746303 1220278 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1114 13:49:01.750850 1220278 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1114 13:49:01.750888 1220278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 --> /preloaded.tar.lz4 (489766197 bytes)
	I1114 13:49:03.850909 1220278 crio.go:444] Took 2.104634 seconds to copy over tarball
	I1114 13:49:03.850990 1220278 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1114 13:49:06.743110 1220278 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.89208964s)
	I1114 13:49:06.743136 1220278 crio.go:451] Took 2.892203 seconds to extract the tarball
	I1114 13:49:06.743146 1220278 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1114 13:49:06.830706 1220278 ssh_runner.go:195] Run: sudo crictl images --output json
	I1114 13:49:06.873060 1220278 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1114 13:49:06.873087 1220278 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1114 13:49:06.873165 1220278 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 13:49:06.873354 1220278 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I1114 13:49:06.873494 1220278 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1114 13:49:06.873577 1220278 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I1114 13:49:06.873647 1220278 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I1114 13:49:06.873727 1220278 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1114 13:49:06.873797 1220278 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1114 13:49:06.873870 1220278 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1114 13:49:06.876718 1220278 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1114 13:49:06.877232 1220278 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I1114 13:49:06.877263 1220278 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1114 13:49:06.877338 1220278 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I1114 13:49:06.877428 1220278 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I1114 13:49:06.877513 1220278 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I1114 13:49:06.877556 1220278 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 13:49:06.877599 1220278 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	W1114 13:49:07.229169 1220278 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1114 13:49:07.229510 1220278 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	W1114 13:49:07.238709 1220278 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1114 13:49:07.238889 1220278 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	W1114 13:49:07.247280 1220278 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I1114 13:49:07.247527 1220278 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I1114 13:49:07.262067 1220278 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	W1114 13:49:07.274945 1220278 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1114 13:49:07.275127 1220278 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	W1114 13:49:07.288947 1220278 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I1114 13:49:07.289175 1220278 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I1114 13:49:07.318275 1220278 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I1114 13:49:07.318367 1220278 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I1114 13:49:07.318447 1220278 ssh_runner.go:195] Run: which crictl
	W1114 13:49:07.333558 1220278 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1114 13:49:07.333752 1220278 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I1114 13:49:07.392433 1220278 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I1114 13:49:07.392478 1220278 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1114 13:49:07.392531 1220278 ssh_runner.go:195] Run: which crictl
	W1114 13:49:07.405246 1220278 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1114 13:49:07.405418 1220278 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 13:49:07.464582 1220278 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I1114 13:49:07.464650 1220278 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I1114 13:49:07.464710 1220278 ssh_runner.go:195] Run: which crictl
	I1114 13:49:07.464808 1220278 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I1114 13:49:07.464825 1220278 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1114 13:49:07.464847 1220278 ssh_runner.go:195] Run: which crictl
	I1114 13:49:07.464926 1220278 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I1114 13:49:07.464947 1220278 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I1114 13:49:07.464980 1220278 ssh_runner.go:195] Run: which crictl
	I1114 13:49:07.500619 1220278 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I1114 13:49:07.500673 1220278 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I1114 13:49:07.500724 1220278 ssh_runner.go:195] Run: which crictl
	I1114 13:49:07.500727 1220278 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I1114 13:49:07.500847 1220278 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I1114 13:49:07.500866 1220278 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I1114 13:49:07.500937 1220278 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I1114 13:49:07.501003 1220278 ssh_runner.go:195] Run: which crictl
	I1114 13:49:07.638271 1220278 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1114 13:49:07.638321 1220278 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 13:49:07.638369 1220278 ssh_runner.go:195] Run: which crictl
	I1114 13:49:07.638514 1220278 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I1114 13:49:07.638574 1220278 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I1114 13:49:07.638620 1220278 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1114 13:49:07.638715 1220278 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	I1114 13:49:07.638730 1220278 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I1114 13:49:07.638803 1220278 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I1114 13:49:07.638772 1220278 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	I1114 13:49:07.751682 1220278 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	I1114 13:49:07.751799 1220278 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 13:49:07.751899 1220278 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	I1114 13:49:07.766106 1220278 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	I1114 13:49:07.766224 1220278 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I1114 13:49:07.766301 1220278 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	I1114 13:49:07.838701 1220278 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1114 13:49:07.838782 1220278 cache_images.go:92] LoadImages completed in 965.674802ms
	W1114 13:49:07.838842 1220278 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20: no such file or directory
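
The arch-mismatch warnings above come from comparing an image's config architecture against the host's. A hedged sketch of that kind of check using go-containerregistry's crane helper (platform selection and error handling are simplified, and the image reference is just one of those listed above):

	package main

	import (
		"encoding/json"
		"fmt"
		"runtime"

		"github.com/google/go-containerregistry/pkg/crane"
	)

	func main() {
		ref := "registry.k8s.io/pause:3.2"
		// crane.Config fetches the image's config blob from the registry.
		raw, err := crane.Config(ref)
		if err != nil {
			panic(err)
		}
		var cfg struct {
			Architecture string `json:"architecture"`
		}
		if err := json.Unmarshal(raw, &cfg); err != nil {
			panic(err)
		}
		if cfg.Architecture != runtime.GOARCH {
			fmt.Printf("image %s arch mismatch: want %s got %s\n", ref, runtime.GOARCH, cfg.Architecture)
		}
	}
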
	I1114 13:49:07.838914 1220278 ssh_runner.go:195] Run: crio config
	I1114 13:49:07.894531 1220278 cni.go:84] Creating CNI manager for ""
	I1114 13:49:07.894574 1220278 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1114 13:49:07.894604 1220278 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1114 13:49:07.894664 1220278 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-814110 NodeName:ingress-addon-legacy-814110 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1114 13:49:07.894798 1220278 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-814110"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
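	Note: a generated config like the one above can be sanity-checked by hand before init; a minimal sketch using standard kubeadm v1.18 subcommands (not part of this test run):
	
	  # Print upstream defaults for this kubeadm API version, for comparison
	  kubeadm config print init-defaults
	  # Run only the preflight phase against the generated file
	  sudo kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml
	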
	I1114 13:49:07.894885 1220278 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=ingress-addon-legacy-814110 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-814110 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
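	Note: the [Unit]/[Service] drop-in above is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp lines below); minikube reloads systemd itself, but applied by hand the standard systemd workflow would be:
	
	  sudo systemctl daemon-reload    # re-read unit files and drop-ins
	  sudo systemctl restart kubelet  # restart kubelet with the new ExecStart
	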
	I1114 13:49:07.894953 1220278 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I1114 13:49:07.905621 1220278 binaries.go:44] Found k8s binaries, skipping transfer
	I1114 13:49:07.905694 1220278 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1114 13:49:07.916022 1220278 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I1114 13:49:07.937025 1220278 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I1114 13:49:07.958227 1220278 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1114 13:49:07.979520 1220278 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1114 13:49:07.983965 1220278 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1114 13:49:07.997334 1220278 certs.go:56] Setting up /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110 for IP: 192.168.49.2
	I1114 13:49:07.997377 1220278 certs.go:190] acquiring lock for shared ca certs: {Name:mk1fdfc415c611904fd8e5ce757e79f4579c67a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 13:49:07.997520 1220278 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17581-1186318/.minikube/ca.key
	I1114 13:49:07.997564 1220278 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17581-1186318/.minikube/proxy-client-ca.key
	I1114 13:49:07.997618 1220278 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/client.key
	I1114 13:49:07.997634 1220278 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/client.crt with IP's: []
	I1114 13:49:08.360921 1220278 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/client.crt ...
	I1114 13:49:08.360952 1220278 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/client.crt: {Name:mk6aaf376189cad2f2c54a2b6881f0f572fb195a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 13:49:08.361153 1220278 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/client.key ...
	I1114 13:49:08.361167 1220278 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/client.key: {Name:mk5399a276929781a659189f8c0cd95f19bc1b96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
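	Note: crypto.go generates these client credentials in Go; a rough openssl equivalent, assuming the shared minikubeCA key pair from ~/.minikube (file names and subject fields here are illustrative):
	
	  openssl genrsa -out client.key 2048
	  openssl req -new -key client.key -subj "/O=system:masters/CN=minikube-user" -out client.csr
	  openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 365 -out client.crt
	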
	I1114 13:49:08.361261 1220278 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/apiserver.key.dd3b5fb2
	I1114 13:49:08.361283 1220278 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1114 13:49:08.698870 1220278 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/apiserver.crt.dd3b5fb2 ...
	I1114 13:49:08.698902 1220278 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/apiserver.crt.dd3b5fb2: {Name:mk334d21a70de7c95384406f57c5243244717ac1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 13:49:08.699081 1220278 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/apiserver.key.dd3b5fb2 ...
	I1114 13:49:08.699096 1220278 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/apiserver.key.dd3b5fb2: {Name:mkaf1c9b70dc70e5b6a7838ce04b084c33d9b760 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 13:49:08.699178 1220278 certs.go:337] copying /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/apiserver.crt
	I1114 13:49:08.699251 1220278 certs.go:341] copying /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/apiserver.key
	I1114 13:49:08.699307 1220278 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/proxy-client.key
	I1114 13:49:08.699326 1220278 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/proxy-client.crt with IP's: []
	I1114 13:49:09.420809 1220278 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/proxy-client.crt ...
	I1114 13:49:09.420847 1220278 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/proxy-client.crt: {Name:mk737756b64470a7dceadb60be43ad81273a87d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 13:49:09.421026 1220278 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/proxy-client.key ...
	I1114 13:49:09.421040 1220278 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/proxy-client.key: {Name:mk1d86c049a616a7ff17b9b6b5b2d42ca4cc6afb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 13:49:09.421126 1220278 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1114 13:49:09.421146 1220278 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1114 13:49:09.421161 1220278 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1114 13:49:09.421177 1220278 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1114 13:49:09.421189 1220278 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1186318/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1114 13:49:09.421205 1220278 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1186318/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1114 13:49:09.421219 1220278 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1186318/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1114 13:49:09.421236 1220278 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1186318/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1114 13:49:09.421286 1220278 certs.go:437] found cert: /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/home/jenkins/minikube-integration/17581-1186318/.minikube/certs/1191690.pem (1338 bytes)
	W1114 13:49:09.421334 1220278 certs.go:433] ignoring /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/home/jenkins/minikube-integration/17581-1186318/.minikube/certs/1191690_empty.pem, impossibly tiny 0 bytes
	I1114 13:49:09.421355 1220278 certs.go:437] found cert: /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/home/jenkins/minikube-integration/17581-1186318/.minikube/certs/ca-key.pem (1675 bytes)
	I1114 13:49:09.421385 1220278 certs.go:437] found cert: /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/home/jenkins/minikube-integration/17581-1186318/.minikube/certs/ca.pem (1082 bytes)
	I1114 13:49:09.421416 1220278 certs.go:437] found cert: /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/home/jenkins/minikube-integration/17581-1186318/.minikube/certs/cert.pem (1123 bytes)
	I1114 13:49:09.421442 1220278 certs.go:437] found cert: /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/home/jenkins/minikube-integration/17581-1186318/.minikube/certs/key.pem (1675 bytes)
	I1114 13:49:09.421492 1220278 certs.go:437] found cert: /home/jenkins/minikube-integration/17581-1186318/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17581-1186318/.minikube/files/etc/ssl/certs/11916902.pem (1708 bytes)
	I1114 13:49:09.421528 1220278 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1186318/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1114 13:49:09.421543 1220278 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/1191690.pem -> /usr/share/ca-certificates/1191690.pem
	I1114 13:49:09.421556 1220278 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1186318/.minikube/files/etc/ssl/certs/11916902.pem -> /usr/share/ca-certificates/11916902.pem
	I1114 13:49:09.422158 1220278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1114 13:49:09.450584 1220278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1114 13:49:09.479444 1220278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1114 13:49:09.508035 1220278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1114 13:49:09.535751 1220278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1114 13:49:09.564139 1220278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1114 13:49:09.591183 1220278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1114 13:49:09.619976 1220278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1114 13:49:09.648899 1220278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1114 13:49:09.677721 1220278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/1191690.pem --> /usr/share/ca-certificates/1191690.pem (1338 bytes)
	I1114 13:49:09.706386 1220278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/files/etc/ssl/certs/11916902.pem --> /usr/share/ca-certificates/11916902.pem (1708 bytes)
	I1114 13:49:09.735130 1220278 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1114 13:49:09.756758 1220278 ssh_runner.go:195] Run: openssl version
	I1114 13:49:09.765087 1220278 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1114 13:49:09.778185 1220278 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1114 13:49:09.783278 1220278 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 14 13:34 /usr/share/ca-certificates/minikubeCA.pem
	I1114 13:49:09.783363 1220278 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1114 13:49:09.794375 1220278 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1114 13:49:09.806982 1220278 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1191690.pem && ln -fs /usr/share/ca-certificates/1191690.pem /etc/ssl/certs/1191690.pem"
	I1114 13:49:09.818434 1220278 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1191690.pem
	I1114 13:49:09.823781 1220278 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 14 13:42 /usr/share/ca-certificates/1191690.pem
	I1114 13:49:09.823853 1220278 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1191690.pem
	I1114 13:49:09.833064 1220278 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1191690.pem /etc/ssl/certs/51391683.0"
	I1114 13:49:09.845304 1220278 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11916902.pem && ln -fs /usr/share/ca-certificates/11916902.pem /etc/ssl/certs/11916902.pem"
	I1114 13:49:09.857837 1220278 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11916902.pem
	I1114 13:49:09.862711 1220278 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 14 13:42 /usr/share/ca-certificates/11916902.pem
	I1114 13:49:09.862776 1220278 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11916902.pem
	I1114 13:49:09.871581 1220278 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11916902.pem /etc/ssl/certs/3ec20f2e.0"
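	Note: the *.0 link names above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject hashes, which is why each ln -fs is preceded by an openssl x509 -hash run; the same mapping can be reproduced by hand:
	
	  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # prints b5213941
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
	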
	I1114 13:49:09.883105 1220278 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1114 13:49:09.887405 1220278 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1114 13:49:09.887468 1220278 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-814110 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-814110 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 13:49:09.887542 1220278 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1114 13:49:09.887604 1220278 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1114 13:49:09.928978 1220278 cri.go:89] found id: ""
	I1114 13:49:09.929056 1220278 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1114 13:49:09.940170 1220278 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1114 13:49:09.951547 1220278 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1114 13:49:09.951642 1220278 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1114 13:49:09.962910 1220278 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1114 13:49:09.962968 1220278 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1114 13:49:10.023165 1220278 kubeadm.go:322] W1114 13:49:10.022631    1229 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1114 13:49:10.077494 1220278 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1049-aws\n", err: exit status 1
	I1114 13:49:10.168024 1220278 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1114 13:49:17.901095 1220278 kubeadm.go:322] W1114 13:49:17.899123    1229 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1114 13:49:17.901562 1220278 kubeadm.go:322] W1114 13:49:17.900877    1229 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1114 13:49:32.903401 1220278 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1114 13:49:32.903455 1220278 kubeadm.go:322] [preflight] Running pre-flight checks
	I1114 13:49:32.903537 1220278 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1114 13:49:32.903588 1220278 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1049-aws
	I1114 13:49:32.903632 1220278 kubeadm.go:322] OS: Linux
	I1114 13:49:32.903674 1220278 kubeadm.go:322] CGROUPS_CPU: enabled
	I1114 13:49:32.903719 1220278 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1114 13:49:32.903763 1220278 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1114 13:49:32.903822 1220278 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1114 13:49:32.903868 1220278 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1114 13:49:32.903912 1220278 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1114 13:49:32.903979 1220278 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1114 13:49:32.904066 1220278 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1114 13:49:32.904151 1220278 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1114 13:49:32.904246 1220278 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1114 13:49:32.904324 1220278 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1114 13:49:32.904361 1220278 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1114 13:49:32.904426 1220278 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1114 13:49:32.906744 1220278 out.go:204]   - Generating certificates and keys ...
	I1114 13:49:32.906826 1220278 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1114 13:49:32.906895 1220278 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1114 13:49:32.906968 1220278 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1114 13:49:32.907028 1220278 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1114 13:49:32.907087 1220278 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1114 13:49:32.907138 1220278 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1114 13:49:32.907193 1220278 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1114 13:49:32.907320 1220278 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-814110 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1114 13:49:32.907373 1220278 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1114 13:49:32.907511 1220278 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-814110 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1114 13:49:32.907577 1220278 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1114 13:49:32.907648 1220278 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1114 13:49:32.907701 1220278 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1114 13:49:32.907763 1220278 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1114 13:49:32.907819 1220278 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1114 13:49:32.907872 1220278 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1114 13:49:32.907938 1220278 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1114 13:49:32.907992 1220278 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1114 13:49:32.908056 1220278 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1114 13:49:32.911112 1220278 out.go:204]   - Booting up control plane ...
	I1114 13:49:32.911286 1220278 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1114 13:49:32.911366 1220278 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1114 13:49:32.911433 1220278 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1114 13:49:32.911514 1220278 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1114 13:49:32.911687 1220278 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1114 13:49:32.911765 1220278 kubeadm.go:322] [apiclient] All control plane components are healthy after 13.502366 seconds
	I1114 13:49:32.911870 1220278 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1114 13:49:32.911998 1220278 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I1114 13:49:32.912056 1220278 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1114 13:49:32.912189 1220278 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-814110 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1114 13:49:32.912245 1220278 kubeadm.go:322] [bootstrap-token] Using token: exlcvv.ck94ijhdfq1109uo
	I1114 13:49:32.914773 1220278 out.go:204]   - Configuring RBAC rules ...
	I1114 13:49:32.914901 1220278 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1114 13:49:32.914986 1220278 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1114 13:49:32.915125 1220278 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1114 13:49:32.915251 1220278 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1114 13:49:32.915365 1220278 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1114 13:49:32.915449 1220278 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1114 13:49:32.915563 1220278 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1114 13:49:32.915631 1220278 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1114 13:49:32.915677 1220278 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1114 13:49:32.915681 1220278 kubeadm.go:322] 
	I1114 13:49:32.915741 1220278 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1114 13:49:32.915746 1220278 kubeadm.go:322] 
	I1114 13:49:32.915822 1220278 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1114 13:49:32.915826 1220278 kubeadm.go:322] 
	I1114 13:49:32.915851 1220278 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1114 13:49:32.915910 1220278 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1114 13:49:32.915960 1220278 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1114 13:49:32.915965 1220278 kubeadm.go:322] 
	I1114 13:49:32.916020 1220278 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1114 13:49:32.916094 1220278 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1114 13:49:32.916162 1220278 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1114 13:49:32.916166 1220278 kubeadm.go:322] 
	I1114 13:49:32.916249 1220278 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1114 13:49:32.916327 1220278 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1114 13:49:32.916332 1220278 kubeadm.go:322] 
	I1114 13:49:32.916418 1220278 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token exlcvv.ck94ijhdfq1109uo \
	I1114 13:49:32.916523 1220278 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:1a1b25420be6487c50639ce0b981e16ee30b54e658d487c3adf6952ff2c4a2c6 \
	I1114 13:49:32.916638 1220278 kubeadm.go:322]     --control-plane 
	I1114 13:49:32.916666 1220278 kubeadm.go:322] 
	I1114 13:49:32.916789 1220278 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1114 13:49:32.916812 1220278 kubeadm.go:322] 
	I1114 13:49:32.916926 1220278 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token exlcvv.ck94ijhdfq1109uo \
	I1114 13:49:32.917084 1220278 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:1a1b25420be6487c50639ce0b981e16ee30b54e658d487c3adf6952ff2c4a2c6 
	I1114 13:49:32.917107 1220278 cni.go:84] Creating CNI manager for ""
	I1114 13:49:32.917115 1220278 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1114 13:49:32.919892 1220278 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1114 13:49:32.922458 1220278 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1114 13:49:32.927556 1220278 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I1114 13:49:32.927578 1220278 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1114 13:49:32.950447 1220278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1114 13:49:33.427430 1220278 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1114 13:49:33.427514 1220278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:49:33.427546 1220278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=6d8573efb5a7770e21024de23a29d810b200278b minikube.k8s.io/name=ingress-addon-legacy-814110 minikube.k8s.io/updated_at=2023_11_14T13_49_33_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:49:33.458632 1220278 ops.go:34] apiserver oom_adj: -16
	I1114 13:49:33.586349 1220278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:49:33.684982 1220278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:49:34.278607 1220278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:49:34.778724 1220278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:49:35.278863 1220278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:49:35.778067 1220278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:49:36.278990 1220278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:49:36.778921 1220278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:49:37.278975 1220278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:49:37.778063 1220278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:49:38.278612 1220278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:49:38.779022 1220278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:49:39.278652 1220278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:49:39.778830 1220278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:49:40.278772 1220278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:49:40.778111 1220278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:49:41.278293 1220278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:49:41.778198 1220278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:49:42.278792 1220278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:49:42.778701 1220278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:49:43.278858 1220278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:49:43.777989 1220278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:49:44.279014 1220278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:49:44.778708 1220278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:49:45.278738 1220278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:49:45.778570 1220278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:49:46.278587 1220278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:49:46.778092 1220278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:49:47.278904 1220278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:49:47.778593 1220278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:49:47.881867 1220278 kubeadm.go:1081] duration metric: took 14.454427426s to wait for elevateKubeSystemPrivileges.
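	Note: the burst of "kubectl get sa default" runs above is the elevateKubeSystemPrivileges wait; minikube polls until the default ServiceAccount exists. Hand-run, the same poll is roughly:
	
	  until sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default \
	        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do sleep 0.5; done
	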
	I1114 13:49:47.881897 1220278 kubeadm.go:406] StartCluster complete in 37.994434447s
	I1114 13:49:47.881913 1220278 settings.go:142] acquiring lock: {Name:mk8b1f62ebfea123b4e39d0037f993206e354b59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 13:49:47.881985 1220278 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17581-1186318/kubeconfig
	I1114 13:49:47.882665 1220278 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17581-1186318/kubeconfig: {Name:mkf1191f735848932fc7f3417e1088220acbc478 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 13:49:47.883350 1220278 kapi.go:59] client config for ingress-addon-legacy-814110: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/client.crt", KeyFile:"/home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/client.key", CAFile:"/home/jenkins/minikube-integration/17581-1186318/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c4650), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1114 13:49:47.884764 1220278 config.go:182] Loaded profile config "ingress-addon-legacy-814110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1114 13:49:47.884785 1220278 cert_rotation.go:137] Starting client certificate rotation controller
	I1114 13:49:47.884731 1220278 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1114 13:49:47.884822 1220278 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-814110"
	I1114 13:49:47.884836 1220278 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-814110"
	I1114 13:49:47.884889 1220278 host.go:66] Checking if "ingress-addon-legacy-814110" exists ...
	I1114 13:49:47.885172 1220278 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1114 13:49:47.885294 1220278 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-814110"
	I1114 13:49:47.885308 1220278 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-814110"
	I1114 13:49:47.885362 1220278 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-814110 --format={{.State.Status}}
	I1114 13:49:47.885630 1220278 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-814110 --format={{.State.Status}}
	I1114 13:49:47.928499 1220278 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 13:49:47.930706 1220278 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1114 13:49:47.930726 1220278 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1114 13:49:47.930793 1220278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-814110
	I1114 13:49:47.940123 1220278 kapi.go:59] client config for ingress-addon-legacy-814110: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/client.crt", KeyFile:"/home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/client.key", CAFile:"/home/jenkins/minikube-integration/17581-1186318/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c4650), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1114 13:49:47.940381 1220278 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-814110"
	I1114 13:49:47.940407 1220278 host.go:66] Checking if "ingress-addon-legacy-814110" exists ...
	I1114 13:49:47.940885 1220278 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-814110 --format={{.State.Status}}
	I1114 13:49:47.971286 1220278 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34294 SSHKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/ingress-addon-legacy-814110/id_rsa Username:docker}
	I1114 13:49:47.991569 1220278 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1114 13:49:47.991594 1220278 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1114 13:49:47.991671 1220278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-814110
	I1114 13:49:48.021194 1220278 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34294 SSHKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/ingress-addon-legacy-814110/id_rsa Username:docker}
	I1114 13:49:48.117098 1220278 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-814110" context rescaled to 1 replicas
	I1114 13:49:48.117188 1220278 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1114 13:49:48.129051 1220278 out.go:177] * Verifying Kubernetes components...
	I1114 13:49:48.131539 1220278 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 13:49:48.147568 1220278 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1114 13:49:48.158552 1220278 kapi.go:59] client config for ingress-addon-legacy-814110: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/client.crt", KeyFile:"/home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/client.key", CAFile:"/home/jenkins/minikube-integration/17581-1186318/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c4650), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1114 13:49:48.158891 1220278 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-814110" to be "Ready" ...
	I1114 13:49:48.170957 1220278 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1114 13:49:48.212636 1220278 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1114 13:49:48.621800 1220278 start.go:926] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
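	Note: the sed pipeline above splices a hosts block (plus a log directive) into the CoreDNS Corefile; reconstructed from the sed expressions rather than captured output, the edited fragment looks roughly like:
	
	  .:53 {
	      log
	      errors
	      ...
	      hosts {
	         192.168.49.1 host.minikube.internal
	         fallthrough
	      }
	      forward . /etc/resolv.conf
	      ...
	  }
	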
	I1114 13:49:48.831439 1220278 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1114 13:49:48.833408 1220278 addons.go:502] enable addons completed in 948.660718ms: enabled=[default-storageclass storage-provisioner]
	I1114 13:49:50.448814 1220278 node_ready.go:58] node "ingress-addon-legacy-814110" has status "Ready":"False"
	I1114 13:49:52.449098 1220278 node_ready.go:58] node "ingress-addon-legacy-814110" has status "Ready":"False"
	I1114 13:49:54.948626 1220278 node_ready.go:58] node "ingress-addon-legacy-814110" has status "Ready":"False"
	I1114 13:49:56.449517 1220278 node_ready.go:49] node "ingress-addon-legacy-814110" has status "Ready":"True"
	I1114 13:49:56.449545 1220278 node_ready.go:38] duration metric: took 8.290608275s waiting for node "ingress-addon-legacy-814110" to be "Ready" ...
	I1114 13:49:56.449556 1220278 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 13:49:56.457117 1220278 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-8k4sx" in "kube-system" namespace to be "Ready" ...
	I1114 13:49:58.465393 1220278 pod_ready.go:102] pod "coredns-66bff467f8-8k4sx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-14 13:49:48 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1114 13:50:00.965042 1220278 pod_ready.go:102] pod "coredns-66bff467f8-8k4sx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-14 13:49:48 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1114 13:50:02.968053 1220278 pod_ready.go:102] pod "coredns-66bff467f8-8k4sx" in "kube-system" namespace has status "Ready":"False"
	I1114 13:50:03.967919 1220278 pod_ready.go:92] pod "coredns-66bff467f8-8k4sx" in "kube-system" namespace has status "Ready":"True"
	I1114 13:50:03.967946 1220278 pod_ready.go:81] duration metric: took 7.510794017s waiting for pod "coredns-66bff467f8-8k4sx" in "kube-system" namespace to be "Ready" ...
	I1114 13:50:03.967962 1220278 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-814110" in "kube-system" namespace to be "Ready" ...
	I1114 13:50:03.972405 1220278 pod_ready.go:92] pod "etcd-ingress-addon-legacy-814110" in "kube-system" namespace has status "Ready":"True"
	I1114 13:50:03.972434 1220278 pod_ready.go:81] duration metric: took 4.464733ms waiting for pod "etcd-ingress-addon-legacy-814110" in "kube-system" namespace to be "Ready" ...
	I1114 13:50:03.972449 1220278 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-814110" in "kube-system" namespace to be "Ready" ...
	I1114 13:50:03.977035 1220278 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-814110" in "kube-system" namespace has status "Ready":"True"
	I1114 13:50:03.977063 1220278 pod_ready.go:81] duration metric: took 4.606238ms waiting for pod "kube-apiserver-ingress-addon-legacy-814110" in "kube-system" namespace to be "Ready" ...
	I1114 13:50:03.977076 1220278 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-814110" in "kube-system" namespace to be "Ready" ...
	I1114 13:50:03.981687 1220278 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-814110" in "kube-system" namespace has status "Ready":"True"
	I1114 13:50:03.981710 1220278 pod_ready.go:81] duration metric: took 4.624232ms waiting for pod "kube-controller-manager-ingress-addon-legacy-814110" in "kube-system" namespace to be "Ready" ...
	I1114 13:50:03.981722 1220278 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-n98c2" in "kube-system" namespace to be "Ready" ...
	I1114 13:50:03.986360 1220278 pod_ready.go:92] pod "kube-proxy-n98c2" in "kube-system" namespace has status "Ready":"True"
	I1114 13:50:03.986390 1220278 pod_ready.go:81] duration metric: took 4.660064ms waiting for pod "kube-proxy-n98c2" in "kube-system" namespace to be "Ready" ...
	I1114 13:50:03.986406 1220278 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-814110" in "kube-system" namespace to be "Ready" ...
	I1114 13:50:04.162778 1220278 request.go:629] Waited for 176.251358ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-814110
	I1114 13:50:04.362701 1220278 request.go:629] Waited for 197.262807ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-814110
	I1114 13:50:04.365587 1220278 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-814110" in "kube-system" namespace has status "Ready":"True"
	I1114 13:50:04.365638 1220278 pod_ready.go:81] duration metric: took 379.223782ms waiting for pod "kube-scheduler-ingress-addon-legacy-814110" in "kube-system" namespace to be "Ready" ...
	I1114 13:50:04.365655 1220278 pod_ready.go:38] duration metric: took 7.916080406s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 13:50:04.365669 1220278 api_server.go:52] waiting for apiserver process to appear ...
	I1114 13:50:04.365745 1220278 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 13:50:04.378883 1220278 api_server.go:72] duration metric: took 16.261648378s to wait for apiserver process to appear ...
	I1114 13:50:04.378914 1220278 api_server.go:88] waiting for apiserver healthz status ...
	I1114 13:50:04.378933 1220278 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1114 13:50:04.387887 1220278 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1114 13:50:04.388860 1220278 api_server.go:141] control plane version: v1.18.20
	I1114 13:50:04.388885 1220278 api_server.go:131] duration metric: took 9.964555ms to wait for apiserver health ...
	I1114 13:50:04.388893 1220278 system_pods.go:43] waiting for kube-system pods to appear ...
	I1114 13:50:04.563188 1220278 request.go:629] Waited for 174.219528ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1114 13:50:04.569298 1220278 system_pods.go:59] 8 kube-system pods found
	I1114 13:50:04.569336 1220278 system_pods.go:61] "coredns-66bff467f8-8k4sx" [9f56ab7a-445a-4d18-9860-986f9b7ddbb0] Running
	I1114 13:50:04.569344 1220278 system_pods.go:61] "etcd-ingress-addon-legacy-814110" [e0bff927-a819-46ca-b5cd-0c502d41e1a1] Running
	I1114 13:50:04.569349 1220278 system_pods.go:61] "kindnet-66n2z" [6e8db226-b9d6-49cd-af22-fcb350c5de74] Running
	I1114 13:50:04.569354 1220278 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-814110" [79eff367-6adb-4c1d-acf5-b40295308f88] Running
	I1114 13:50:04.569359 1220278 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-814110" [c2e8342d-304f-4c20-a39c-54db23a2ee6d] Running
	I1114 13:50:04.569364 1220278 system_pods.go:61] "kube-proxy-n98c2" [efab5402-60be-4b66-b02e-7954cd10b4a2] Running
	I1114 13:50:04.569369 1220278 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-814110" [b4679785-d0f3-41dc-a0a4-1b4ff8e78906] Running
	I1114 13:50:04.569374 1220278 system_pods.go:61] "storage-provisioner" [f869d2a3-1807-444a-9049-9298cc449066] Running
	I1114 13:50:04.569381 1220278 system_pods.go:74] duration metric: took 180.481312ms to wait for pod list to return data ...
	I1114 13:50:04.569393 1220278 default_sa.go:34] waiting for default service account to be created ...
	I1114 13:50:04.762767 1220278 request.go:629] Waited for 193.264122ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I1114 13:50:04.765421 1220278 default_sa.go:45] found service account: "default"
	I1114 13:50:04.765462 1220278 default_sa.go:55] duration metric: took 196.056153ms for default service account to be created ...
	I1114 13:50:04.765475 1220278 system_pods.go:116] waiting for k8s-apps to be running ...
	I1114 13:50:04.962712 1220278 request.go:629] Waited for 197.17306ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1114 13:50:04.968582 1220278 system_pods.go:86] 8 kube-system pods found
	I1114 13:50:04.968617 1220278 system_pods.go:89] "coredns-66bff467f8-8k4sx" [9f56ab7a-445a-4d18-9860-986f9b7ddbb0] Running
	I1114 13:50:04.968629 1220278 system_pods.go:89] "etcd-ingress-addon-legacy-814110" [e0bff927-a819-46ca-b5cd-0c502d41e1a1] Running
	I1114 13:50:04.968634 1220278 system_pods.go:89] "kindnet-66n2z" [6e8db226-b9d6-49cd-af22-fcb350c5de74] Running
	I1114 13:50:04.968639 1220278 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-814110" [79eff367-6adb-4c1d-acf5-b40295308f88] Running
	I1114 13:50:04.968645 1220278 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-814110" [c2e8342d-304f-4c20-a39c-54db23a2ee6d] Running
	I1114 13:50:04.968649 1220278 system_pods.go:89] "kube-proxy-n98c2" [efab5402-60be-4b66-b02e-7954cd10b4a2] Running
	I1114 13:50:04.968655 1220278 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-814110" [b4679785-d0f3-41dc-a0a4-1b4ff8e78906] Running
	I1114 13:50:04.968659 1220278 system_pods.go:89] "storage-provisioner" [f869d2a3-1807-444a-9049-9298cc449066] Running
	I1114 13:50:04.968667 1220278 system_pods.go:126] duration metric: took 203.185698ms to wait for k8s-apps to be running ...
	I1114 13:50:04.968674 1220278 system_svc.go:44] waiting for kubelet service to be running ....
	I1114 13:50:04.968735 1220278 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 13:50:04.982635 1220278 system_svc.go:56] duration metric: took 13.949817ms WaitForService to wait for kubelet.
	I1114 13:50:04.982666 1220278 kubeadm.go:581] duration metric: took 16.865437278s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1114 13:50:04.982687 1220278 node_conditions.go:102] verifying NodePressure condition ...
	I1114 13:50:05.163071 1220278 request.go:629] Waited for 180.315201ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I1114 13:50:05.165920 1220278 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1114 13:50:05.165957 1220278 node_conditions.go:123] node cpu capacity is 2
	I1114 13:50:05.165970 1220278 node_conditions.go:105] duration metric: took 183.277495ms to run NodePressure ...
	I1114 13:50:05.165983 1220278 start.go:228] waiting for startup goroutines ...
	I1114 13:50:05.165990 1220278 start.go:233] waiting for cluster config update ...
	I1114 13:50:05.165999 1220278 start.go:242] writing updated cluster config ...
	I1114 13:50:05.166282 1220278 ssh_runner.go:195] Run: rm -f paused
	I1114 13:50:05.227844 1220278 start.go:600] kubectl: 1.28.3, cluster: 1.18.20 (minor skew: 10)
	I1114 13:50:05.230551 1220278 out.go:177] 
	W1114 13:50:05.232321 1220278 out.go:239] ! /usr/local/bin/kubectl is version 1.28.3, which may have incompatibilities with Kubernetes 1.18.20.
	I1114 13:50:05.234090 1220278 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I1114 13:50:05.236088 1220278 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-814110" cluster and "default" namespace by default
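
Note: the version-skew warning above is expected on this runner: the host kubectl is 1.28.3 while the cluster runs 1.18.20, ten minor versions apart. A minimal way to query the cluster with a matching client, assuming the profile name from this run, is minikube's bundled kubectl (as the log itself suggests):

    minikube -p ingress-addon-legacy-814110 kubectl -- get pods -A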
	
	* 
	* ==> CRI-O <==
	* Nov 14 13:54:36 ingress-addon-legacy-814110 crio[898]: time="2023-11-14 13:54:36.161035432Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c,RepoTags:[k8s.gcr.io/pause:3.2 registry.k8s.io/pause:3.2],RepoDigests:[k8s.gcr.io/pause@sha256:31d3efd12022ffeffb3146bc10ae8beb890c80ed2f07363515580add7ed47636 k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f registry.k8s.io/pause@sha256:31d3efd12022ffeffb3146bc10ae8beb890c80ed2f07363515580add7ed47636 registry.k8s.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f],Size_:489397,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=40fbc456-99db-4fb8-a0fa-fed67e8e2c4d name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 14 13:54:38 ingress-addon-legacy-814110 crio[898]: time="2023-11-14 13:54:38.238890357Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=1102e6f8-9294-44da-bb36-6e9503d344ea name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 14 13:54:38 ingress-addon-legacy-814110 crio[898]: time="2023-11-14 13:54:38.239161116Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=1102e6f8-9294-44da-bb36-6e9503d344ea name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 14 13:54:46 ingress-addon-legacy-814110 crio[898]: time="2023-11-14 13:54:46.239277172Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=3f2862ef-c142-4c77-b340-ebf022a6061b name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 14 13:54:46 ingress-addon-legacy-814110 crio[898]: time="2023-11-14 13:54:46.239571882Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=3f2862ef-c142-4c77-b340-ebf022a6061b name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 14 13:54:51 ingress-addon-legacy-814110 crio[898]: time="2023-11-14 13:54:51.238659177Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=63d30e9a-1a8c-45d8-9907-597dc11224d5 name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 14 13:54:51 ingress-addon-legacy-814110 crio[898]: time="2023-11-14 13:54:51.238923840Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=63d30e9a-1a8c-45d8-9907-597dc11224d5 name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 14 13:54:57 ingress-addon-legacy-814110 crio[898]: time="2023-11-14 13:54:57.238823005Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=37f6d599-5c9e-4b9c-bfea-c9c472860db9 name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 14 13:54:57 ingress-addon-legacy-814110 crio[898]: time="2023-11-14 13:54:57.239111955Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=37f6d599-5c9e-4b9c-bfea-c9c472860db9 name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 14 13:55:02 ingress-addon-legacy-814110 crio[898]: time="2023-11-14 13:55:02.238576854Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=9b792e24-81f5-441d-afdf-8cb462ad96da name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 14 13:55:02 ingress-addon-legacy-814110 crio[898]: time="2023-11-14 13:55:02.238847778Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=9b792e24-81f5-441d-afdf-8cb462ad96da name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 14 13:55:12 ingress-addon-legacy-814110 crio[898]: time="2023-11-14 13:55:12.238835539Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=6511838a-b595-47db-86cc-e6e47f691c5d name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 14 13:55:12 ingress-addon-legacy-814110 crio[898]: time="2023-11-14 13:55:12.239110935Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=6511838a-b595-47db-86cc-e6e47f691c5d name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 14 13:55:15 ingress-addon-legacy-814110 crio[898]: time="2023-11-14 13:55:15.238751939Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=120f9b82-ccdb-4012-a4f9-5ecd9acaff9c name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 14 13:55:15 ingress-addon-legacy-814110 crio[898]: time="2023-11-14 13:55:15.239037795Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=120f9b82-ccdb-4012-a4f9-5ecd9acaff9c name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 14 13:55:25 ingress-addon-legacy-814110 crio[898]: time="2023-11-14 13:55:25.238768073Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=1b8b503d-9268-4dfc-9a13-0a976b2d6ff0 name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 14 13:55:25 ingress-addon-legacy-814110 crio[898]: time="2023-11-14 13:55:25.239055333Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=1b8b503d-9268-4dfc-9a13-0a976b2d6ff0 name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 14 13:55:25 ingress-addon-legacy-814110 crio[898]: time="2023-11-14 13:55:25.239865206Z" level=info msg="Pulling image: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=8d19c498-c7d2-428b-8f4a-bc394e36f55f name=/runtime.v1alpha2.ImageService/PullImage
	Nov 14 13:55:25 ingress-addon-legacy-814110 crio[898]: time="2023-11-14 13:55:25.242551482Z" level=info msg="Trying to access \"docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Nov 14 13:55:29 ingress-addon-legacy-814110 crio[898]: time="2023-11-14 13:55:29.238735817Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=9669485c-5515-4abf-9039-c8463c70d4c0 name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 14 13:55:29 ingress-addon-legacy-814110 crio[898]: time="2023-11-14 13:55:29.239023716Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=9669485c-5515-4abf-9039-c8463c70d4c0 name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 14 13:55:41 ingress-addon-legacy-814110 crio[898]: time="2023-11-14 13:55:41.238753288Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=07342e2c-cfe6-4641-bb11-81dc7f7c3ccb name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 14 13:55:41 ingress-addon-legacy-814110 crio[898]: time="2023-11-14 13:55:41.239034451Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=07342e2c-cfe6-4641-bb11-81dc7f7c3ccb name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 14 13:55:56 ingress-addon-legacy-814110 crio[898]: time="2023-11-14 13:55:56.238939805Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=0155bbeb-6178-4883-9d42-e06f161f69f1 name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 14 13:55:56 ingress-addon-legacy-814110 crio[898]: time="2023-11-14 13:55:56.239291442Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=0155bbeb-6178-4883-9d42-e06f161f69f1 name=/runtime.v1alpha2.ImageService/ImageStatus
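
Note: the loop above is the runtime repeatedly answering the kubelet's ImageStatus probes with "not found" and then attempting a fresh pull of the webhook-certgen image; the pull itself keeps failing upstream (the kubelet section below shows the Docker Hub toomanyrequests error). A sketch of a manual check from inside the node, assuming the profile name from this run:

    minikube -p ingress-addon-legacy-814110 ssh -- sudo crictl images | grep kube-webhook-certgen
    minikube -p ingress-addon-legacy-814110 ssh -- sudo crictl pull docker.io/jettech/kube-webhook-certgen:v1.5.1

If the registry is rate-limiting anonymous pulls, pulling the image on an authenticated host and loading it into the cluster (for example with 'minikube image load') sidesteps the limit.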
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                             CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e2dfea6db0038       6e17ba78cf3ebe1410fe828dc4ca57d3df37ad0b3c1a64161e5c27d57a24d184                                                  6 minutes ago       Running             coredns                   0                   16e72ca153e72       coredns-66bff467f8-8k4sx
	2cf9960ed4483       gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2   6 minutes ago       Running             storage-provisioner       0                   7eb3670aa5439       storage-provisioner
	1233093423d4d       docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052                6 minutes ago       Running             kindnet-cni               0                   4eba235f78dc7       kindnet-66n2z
	3ff47c9dd0749       565297bc6f7d41fdb7a8ac7f9d75617ef4e6efdd1b1e41af6e060e19c44c28a8                                                  6 minutes ago       Running             kube-proxy                0                   80b0c62f16b2f       kube-proxy-n98c2
	4e5d19b2f0e82       68a4fac29a865f21217550dbd3570dc1adbc602cf05d6eeb6f060eec1359e1f1                                                  6 minutes ago       Running             kube-controller-manager   0                   d5943c1440d01       kube-controller-manager-ingress-addon-legacy-814110
	c3514bf1a6e6b       2694cf044d66591c37b12c60ce1f1cdba3d271af5ebda43a2e4d32ebbadd97d0                                                  6 minutes ago       Running             kube-apiserver            0                   6e1fc3b419df2       kube-apiserver-ingress-addon-legacy-814110
	1e9198b4f97a6       ab707b0a0ea339254cc6e3f2e7d618d4793d5129acb2288e9194769271404952                                                  6 minutes ago       Running             etcd                      0                   4c9e140679323       etcd-ingress-addon-legacy-814110
	92ecb93026e45       095f37015706de6eedb4f57eb2f9a25a1e3bf4bec63d50ba73f8968ef4094fd1                                                  6 minutes ago       Running             kube-scheduler            0                   4d92df1fe9b1c       kube-scheduler-ingress-addon-legacy-814110
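
This table can be re-generated against a live node with crictl, e.g. (a sketch, assuming the profile name from this run):

    minikube -p ingress-addon-legacy-814110 ssh -- sudo crictl ps -a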
	
	* 
	* ==> coredns [e2dfea6db0038b864200055ec6a5d37fcf9105316391feb798764e2953c92119] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = 45700869df5177c7f3d9f7a279928a55
	CoreDNS-1.6.7
	linux/arm64, go1.13.6, da7f65b
	[INFO] 127.0.0.1:49151 - 49750 "HINFO IN 5663434194049127381.8240556951991835833. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013653597s
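
Note: the single HINFO query answered with NXDOMAIN is CoreDNS's loop-detection self-probe at startup; NXDOMAIN is the healthy result, not an error. To re-check the resolver later, one option (assuming the default kube-dns labels, which this cluster uses) is:

    kubectl --context ingress-addon-legacy-814110 -n kube-system logs -l k8s-app=kube-dns --tail=20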
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-814110
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-814110
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d8573efb5a7770e21024de23a29d810b200278b
	                    minikube.k8s.io/name=ingress-addon-legacy-814110
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_14T13_49_33_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 14 Nov 2023 13:49:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-814110
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 14 Nov 2023 13:56:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 14 Nov 2023 13:55:06 +0000   Tue, 14 Nov 2023 13:49:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 14 Nov 2023 13:55:06 +0000   Tue, 14 Nov 2023 13:49:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 14 Nov 2023 13:55:06 +0000   Tue, 14 Nov 2023 13:49:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 14 Nov 2023 13:55:06 +0000   Tue, 14 Nov 2023 13:49:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-814110
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 1871bd87db4d4cf8ac58e66a78d35c50
	  System UUID:                0ea239da-03bc-4af9-a9d0-18b9b6d0d8b9
	  Boot ID:                    3bdb9c53-2d63-44b9-be60-6ff1ad471e35
	  Kernel Version:             5.15.0-1049-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  ingress-nginx               ingress-nginx-admission-create-wkbjv                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	  ingress-nginx               ingress-nginx-admission-patch-9zb8z                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	  ingress-nginx               ingress-nginx-controller-7fcf777cb7-9bsww              100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         6m1s
	  kube-system                 coredns-66bff467f8-8k4sx                               100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m19s
	  kube-system                 etcd-ingress-addon-legacy-814110                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m31s
	  kube-system                 kindnet-66n2z                                          100m (5%)    100m (5%)    50Mi (0%)        50Mi (0%)      6m19s
	  kube-system                 kube-apiserver-ingress-addon-legacy-814110             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m31s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-814110    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m31s
	  kube-system                 kube-proxy-n98c2                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m19s
	  kube-system                 kube-scheduler-ingress-addon-legacy-814110             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m31s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             210Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  6m46s (x5 over 6m46s)  kubelet     Node ingress-addon-legacy-814110 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m46s (x5 over 6m46s)  kubelet     Node ingress-addon-legacy-814110 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m46s (x4 over 6m46s)  kubelet     Node ingress-addon-legacy-814110 status is now: NodeHasSufficientPID
	  Normal  Starting                 6m31s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m31s                  kubelet     Node ingress-addon-legacy-814110 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m31s                  kubelet     Node ingress-addon-legacy-814110 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m31s                  kubelet     Node ingress-addon-legacy-814110 status is now: NodeHasSufficientPID
	  Normal  Starting                 6m19s                  kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                6m11s                  kubelet     Node ingress-addon-legacy-814110 status is now: NodeReady
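
The node report above is kubectl describe output; to re-query it against a live cluster (assuming the context name minikube configured for this profile):

    kubectl --context ingress-addon-legacy-814110 describe node ingress-addon-legacy-814110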
	
	* 
	* ==> dmesg <==
	* [  +0.001143] FS-Cache: O-key=[8] '84643b0000000000'
	[  +0.000762] FS-Cache: N-cookie c=00000066 [p=0000005d fl=2 nc=0 na=1]
	[  +0.000999] FS-Cache: N-cookie d=00000000fbc4fe34{9p.inode} n=000000002244812d
	[  +0.001146] FS-Cache: N-key=[8] '84643b0000000000'
	[  +0.003454] FS-Cache: Duplicate cookie detected
	[  +0.000756] FS-Cache: O-cookie c=0000005f [p=0000005d fl=226 nc=0 na=1]
	[  +0.001057] FS-Cache: O-cookie d=00000000fbc4fe34{9p.inode} n=00000000ecb0ec67
	[  +0.001110] FS-Cache: O-key=[8] '84643b0000000000'
	[  +0.000749] FS-Cache: N-cookie c=00000067 [p=0000005d fl=2 nc=0 na=1]
	[  +0.001022] FS-Cache: N-cookie d=00000000fbc4fe34{9p.inode} n=0000000038574f41
	[  +0.001139] FS-Cache: N-key=[8] '84643b0000000000'
	[  +3.132585] FS-Cache: Duplicate cookie detected
	[  +0.000755] FS-Cache: O-cookie c=0000005e [p=0000005d fl=226 nc=0 na=1]
	[  +0.001032] FS-Cache: O-cookie d=00000000fbc4fe34{9p.inode} n=00000000e83a4aa7
	[  +0.001160] FS-Cache: O-key=[8] '83643b0000000000'
	[  +0.000753] FS-Cache: N-cookie c=00000069 [p=0000005d fl=2 nc=0 na=1]
	[  +0.000982] FS-Cache: N-cookie d=00000000fbc4fe34{9p.inode} n=000000002244812d
	[  +0.001111] FS-Cache: N-key=[8] '83643b0000000000'
	[  +0.323161] FS-Cache: Duplicate cookie detected
	[  +0.000805] FS-Cache: O-cookie c=00000063 [p=0000005d fl=226 nc=0 na=1]
	[  +0.001104] FS-Cache: O-cookie d=00000000fbc4fe34{9p.inode} n=0000000060b8cdea
	[  +0.001286] FS-Cache: O-key=[8] '89643b0000000000'
	[  +0.000771] FS-Cache: N-cookie c=0000006a [p=0000005d fl=2 nc=0 na=1]
	[  +0.001023] FS-Cache: N-cookie d=00000000fbc4fe34{9p.inode} n=00000000495e4eb3
	[  +0.001223] FS-Cache: N-key=[8] '89643b0000000000'
	
	* 
	* ==> etcd [1e9198b4f97a6f3d51b839bca467dcfafccfb90dc04c16d668c85ab153a5b7fd] <==
	* 2023-11-14 13:49:24.759294 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-11-14 13:49:24.796679 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2023/11/14 13:49:24 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-11-14 13:49:24.829319 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2023-11-14 13:49:24.846494 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-11-14 13:49:24.944876 I | embed: listening for peers on 192.168.49.2:2380
	2023-11-14 13:49:24.954985 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2023/11/14 13:49:25 INFO: aec36adc501070cc is starting a new election at term 1
	raft2023/11/14 13:49:25 INFO: aec36adc501070cc became candidate at term 2
	raft2023/11/14 13:49:25 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2023/11/14 13:49:25 INFO: aec36adc501070cc became leader at term 2
	raft2023/11/14 13:49:25 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2023-11-14 13:49:25.685393 I | etcdserver: published {Name:ingress-addon-legacy-814110 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2023-11-14 13:49:25.685513 I | embed: ready to serve client requests
	2023-11-14 13:49:25.685751 I | etcdserver: setting up the initial cluster version to 3.4
	2023-11-14 13:49:25.686235 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-11-14 13:49:25.686381 I | etcdserver/api: enabled capabilities for version 3.4
	2023-11-14 13:49:25.686447 I | embed: ready to serve client requests
	2023-11-14 13:49:25.686923 I | embed: serving client requests on 127.0.0.1:2379
	2023-11-14 13:49:25.695626 I | embed: serving client requests on 192.168.49.2:2379
	2023-11-14 13:49:48.351462 W | etcdserver: read-only range request "key:\"/registry/replicasets/kube-system/coredns-66bff467f8\" " with result "range_response_count:1 size:3683" took too long (109.417065ms) to execute
	2023-11-14 13:49:48.351626 W | etcdserver: read-only range request "key:\"/registry/daemonsets/kube-system/kindnet\" " with result "range_response_count:1 size:4688" took too long (134.415983ms) to execute
	2023-11-14 13:49:48.353506 W | etcdserver: read-only range request "key:\"/registry/minions/ingress-addon-legacy-814110\" " with result "range_response_count:1 size:6504" took too long (136.819766ms) to execute
	2023-11-14 13:49:48.482373 W | etcdserver: read-only range request "key:\"/registry/minions/ingress-addon-legacy-814110\" " with result "range_response_count:1 size:6504" took too long (142.363123ms) to execute
	2023-11-14 13:49:48.493927 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-proxy-n98c2\" " with result "range_response_count:1 size:3588" took too long (157.528677ms) to execute
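
Note: the trailing etcdserver warnings flag reads that exceeded etcd's default 100ms slow-request threshold; on a shared CI host this usually points to transient I/O pressure rather than a fault. A sketch of a health probe, reusing the certificate paths logged above and assuming etcdctl ships in the etcd image:

    kubectl --context ingress-addon-legacy-814110 -n kube-system exec etcd-ingress-addon-legacy-814110 -- \
      etcdctl --endpoints=https://127.0.0.1:2379 \
        --cacert=/var/lib/minikube/certs/etcd/ca.crt \
        --cert=/var/lib/minikube/certs/etcd/server.crt \
        --key=/var/lib/minikube/certs/etcd/server.key \
        endpoint health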
	
	* 
	* ==> kernel <==
	*  13:56:07 up 10:38,  0 users,  load average: 0.13, 0.31, 0.81
	Linux ingress-addon-legacy-814110 5.15.0-1049-aws #54~20.04.1-Ubuntu SMP Fri Oct 6 22:07:16 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [1233093423d4dd2cd51a48a275bd031397a8c8cc3f80caad8be47bdf0ce8d792] <==
	* I1114 13:54:01.803138       1 main.go:227] handling current node
	I1114 13:54:11.807194       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1114 13:54:11.807225       1 main.go:227] handling current node
	I1114 13:54:21.816497       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1114 13:54:21.816525       1 main.go:227] handling current node
	I1114 13:54:31.827565       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1114 13:54:31.827595       1 main.go:227] handling current node
	I1114 13:54:41.832234       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1114 13:54:41.832266       1 main.go:227] handling current node
	I1114 13:54:51.836315       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1114 13:54:51.836344       1 main.go:227] handling current node
	I1114 13:55:01.844123       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1114 13:55:01.844157       1 main.go:227] handling current node
	I1114 13:55:11.853114       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1114 13:55:11.853143       1 main.go:227] handling current node
	I1114 13:55:21.857813       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1114 13:55:21.857845       1 main.go:227] handling current node
	I1114 13:55:31.862044       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1114 13:55:31.862075       1 main.go:227] handling current node
	I1114 13:55:41.872040       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1114 13:55:41.872065       1 main.go:227] handling current node
	I1114 13:55:51.877846       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1114 13:55:51.877991       1 main.go:227] handling current node
	I1114 13:56:01.885629       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1114 13:56:01.885660       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [c3514bf1a6e6b88e0d142b54a893762bacd6330d9afa8404a5bf8e09137177a0] <==
	* I1114 13:49:29.658848       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I1114 13:49:29.658888       1 shared_informer.go:223] Waiting for caches to sync for crd-autoregister
	E1114 13:49:29.741979       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I1114 13:49:29.760768       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I1114 13:49:29.760915       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I1114 13:49:29.779780       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1114 13:49:29.780822       1 cache.go:39] Caches are synced for autoregister controller
	I1114 13:49:29.854094       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1114 13:49:30.650300       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1114 13:49:30.650330       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1114 13:49:30.655781       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I1114 13:49:30.659452       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I1114 13:49:30.659484       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I1114 13:49:31.053430       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1114 13:49:31.105755       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1114 13:49:31.177479       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1114 13:49:31.178534       1 controller.go:609] quota admission added evaluator for: endpoints
	I1114 13:49:31.183955       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1114 13:49:32.048572       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I1114 13:49:32.776089       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I1114 13:49:32.861246       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I1114 13:49:36.165694       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I1114 13:49:47.996234       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I1114 13:49:48.001970       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I1114 13:50:06.175721       1 controller.go:609] quota admission added evaluator for: jobs.batch
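
The healthz probe minikube ran during startup can be repeated from the host; /healthz is typically reachable by unauthenticated clients on kubeadm-style clusters, so a plain curl against the address logged above is enough (a sketch):

    curl -sk https://192.168.49.2:8443/healthz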
	
	* 
	* ==> kube-controller-manager [4e5d19b2f0e8227e0d1d4a26093125a427c7b7538a1409bf2f30f3c0ad038fba] <==
	* I1114 13:49:47.994421       1 shared_informer.go:230] Caches are synced for GC 
	I1114 13:49:47.995352       1 shared_informer.go:230] Caches are synced for HPA 
	I1114 13:49:47.995408       1 shared_informer.go:230] Caches are synced for ReplicationController 
	I1114 13:49:48.030677       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"370b3dc4-b3b0-49f7-b9aa-c2145e8d7601", APIVersion:"apps/v1", ResourceVersion:"210", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-66bff467f8 to 2
	I1114 13:49:48.055192       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1114 13:49:48.055412       1 shared_informer.go:230] Caches are synced for stateful set 
	I1114 13:49:48.055419       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1114 13:49:48.060875       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1114 13:49:48.104050       1 shared_informer.go:230] Caches are synced for resource quota 
	I1114 13:49:48.104615       1 shared_informer.go:230] Caches are synced for disruption 
	I1114 13:49:48.104683       1 disruption.go:339] Sending events to api server.
	I1114 13:49:48.104765       1 shared_informer.go:230] Caches are synced for resource quota 
	I1114 13:49:48.125996       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"0890a42d-76c8-47c6-bbeb-6e12fd4ce104", APIVersion:"apps/v1", ResourceVersion:"335", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-rx7dp
	I1114 13:49:48.126033       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"679a21da-915c-4cf6-8d67-398ff6e38ff7", APIVersion:"apps/v1", ResourceVersion:"234", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-66n2z
	I1114 13:49:48.176058       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"0890a42d-76c8-47c6-bbeb-6e12fd4ce104", APIVersion:"apps/v1", ResourceVersion:"335", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-8k4sx
	I1114 13:49:48.176092       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"63fc70cb-c20d-4678-abaf-fe3b26ca6316", APIVersion:"apps/v1", ResourceVersion:"217", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-n98c2
	I1114 13:49:48.367425       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"370b3dc4-b3b0-49f7-b9aa-c2145e8d7601", APIVersion:"apps/v1", ResourceVersion:"348", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	E1114 13:49:48.428167       1 daemon_controller.go:321] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kindnet", UID:"679a21da-915c-4cf6-8d67-398ff6e38ff7", ResourceVersion:"234", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63835566573, loc:(*time.Location)(0x6307ca0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\
"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"kindnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"docker.io/kindest/kindnetd:v20230809-80a64d96\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",
\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x40013e8b00), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x40013e8b20)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x40013e8b80), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*
int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x40013e8c40), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI
:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x40013e8cc0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVol
umeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x40013e8ce0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDis
k:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), Sca
leIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"docker.io/kindest/kindnetd:v20230809-80a64d96", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x40013e8d00)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x40013e8d40)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.Re
sourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log"
, TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x40010b4000), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4000a57158), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x400058d110), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.P
odDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x400000e1d0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4000a57220)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	I1114 13:49:48.688001       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"0890a42d-76c8-47c6-bbeb-6e12fd4ce104", APIVersion:"apps/v1", ResourceVersion:"356", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-rx7dp
	I1114 13:49:57.908068       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I1114 13:50:06.162797       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"f0b5351b-a056-4980-8005-7a8a6613c50a", APIVersion:"apps/v1", ResourceVersion:"476", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I1114 13:50:06.198252       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"9716d439-035f-4b5f-a317-dcc05a62c9c9", APIVersion:"apps/v1", ResourceVersion:"477", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-9bsww
	I1114 13:50:06.209522       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"b31188e0-261d-414f-a0fe-881dfb5f680d", APIVersion:"batch/v1", ResourceVersion:"480", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-wkbjv
	I1114 13:50:06.267765       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"115943da-a4ce-4353-87af-e2738aff5adf", APIVersion:"batch/v1", ResourceVersion:"493", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-9zb8z
	
	* 
	* ==> kube-proxy [3ff47c9dd0749b62e315cd73745025e93b632a3b2359ad311a1319f9c6db623c] <==
	* W1114 13:49:48.897655       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I1114 13:49:48.912917       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I1114 13:49:48.912968       1 server_others.go:186] Using iptables Proxier.
	I1114 13:49:48.913297       1 server.go:583] Version: v1.18.20
	I1114 13:49:48.915960       1 config.go:315] Starting service config controller
	I1114 13:49:48.916154       1 shared_informer.go:223] Waiting for caches to sync for service config
	I1114 13:49:48.916422       1 config.go:133] Starting endpoints config controller
	I1114 13:49:48.916459       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I1114 13:49:49.016524       1 shared_informer.go:230] Caches are synced for service config 
	I1114 13:49:49.016725       1 shared_informer.go:230] Caches are synced for endpoints config 
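
Note: the 'Unknown proxy mode ""' warning only means no mode was configured, so kube-proxy fell back to its iptables default. To inspect the NAT rules it programmed (a sketch, assuming the profile name from this run):

    minikube -p ingress-addon-legacy-814110 ssh -- sudo iptables -t nat -L KUBE-SERVICES -n | head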
	
	* 
	* ==> kube-scheduler [92ecb93026e45063bce6707ecf57a8e58efb9cfb6c1adbfefaa07a4540a4f13a] <==
	* I1114 13:49:29.824089       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I1114 13:49:29.825291       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1114 13:49:29.825316       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1114 13:49:29.829942       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1114 13:49:29.830692       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1114 13:49:29.830755       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1114 13:49:29.830810       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I1114 13:49:29.830817       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E1114 13:49:29.831690       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1114 13:49:29.831749       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1114 13:49:29.831810       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1114 13:49:29.831864       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1114 13:49:29.833722       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1114 13:49:29.833824       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1114 13:49:29.833932       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1114 13:49:29.834327       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1114 13:49:30.722291       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1114 13:49:30.799086       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1114 13:49:30.809179       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1114 13:49:30.825155       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1114 13:49:30.870477       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1114 13:49:30.879739       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I1114 13:49:31.325458       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E1114 13:49:48.534871       1 factory.go:503] pod: kube-system/coredns-66bff467f8-8k4sx is already present in the active queue
	E1114 13:49:48.634646       1 factory.go:503] pod: kube-system/coredns-66bff467f8-rx7dp is already present in the active queue
	
	* 
	* ==> kubelet <==
	* Nov 14 13:53:56 ingress-addon-legacy-814110 kubelet[1633]: E1114 13:53:56.664040    1633 remote_image.go:113] PullImage "docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" from image service failed: rpc error: code = Unknown desc = loading manifest for target platform: reading manifest sha256:d402db4f47a0e1007e8feb5e57d93c44f6c98ebf489ca77bacb91f8eefd2419b in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Nov 14 13:53:56 ingress-addon-legacy-814110 kubelet[1633]: E1114 13:53:56.664096    1633 kuberuntime_image.go:50] Pull image "docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" failed: rpc error: code = Unknown desc = loading manifest for target platform: reading manifest sha256:d402db4f47a0e1007e8feb5e57d93c44f6c98ebf489ca77bacb91f8eefd2419b in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Nov 14 13:53:56 ingress-addon-legacy-814110 kubelet[1633]: E1114 13:53:56.664238    1633 kuberuntime_manager.go:818] container start failed: ErrImagePull: rpc error: code = Unknown desc = loading manifest for target platform: reading manifest sha256:d402db4f47a0e1007e8feb5e57d93c44f6c98ebf489ca77bacb91f8eefd2419b in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Nov 14 13:53:56 ingress-addon-legacy-814110 kubelet[1633]: E1114 13:53:56.664275    1633 pod_workers.go:191] Error syncing pod 8e74885b-9f59-40e3-bc05-1cb032cff8f3 ("ingress-nginx-admission-create-wkbjv_ingress-nginx(8e74885b-9f59-40e3-bc05-1cb032cff8f3)"), skipping: failed to "StartContainer" for "create" with ErrImagePull: "rpc error: code = Unknown desc = loading manifest for target platform: reading manifest sha256:d402db4f47a0e1007e8feb5e57d93c44f6c98ebf489ca77bacb91f8eefd2419b in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Nov 14 13:54:10 ingress-addon-legacy-814110 kubelet[1633]: E1114 13:54:10.240076    1633 pod_workers.go:191] Error syncing pod 8e74885b-9f59-40e3-bc05-1cb032cff8f3 ("ingress-nginx-admission-create-wkbjv_ingress-nginx(8e74885b-9f59-40e3-bc05-1cb032cff8f3)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Nov 14 13:54:17 ingress-addon-legacy-814110 kubelet[1633]: E1114 13:54:17.309423    1633 secret.go:195] Couldn't get secret ingress-nginx/ingress-nginx-admission: secret "ingress-nginx-admission" not found
	Nov 14 13:54:17 ingress-addon-legacy-814110 kubelet[1633]: E1114 13:54:17.309527    1633 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/ec022f32-b5da-4608-a798-33f084e15b28-webhook-cert podName:ec022f32-b5da-4608-a798-33f084e15b28 nodeName:}" failed. No retries permitted until 2023-11-14 13:56:19.309502281 +0000 UTC m=+406.612913769 (durationBeforeRetry 2m2s). Error: "MountVolume.SetUp failed for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ec022f32-b5da-4608-a798-33f084e15b28-webhook-cert\") pod \"ingress-nginx-controller-7fcf777cb7-9bsww\" (UID: \"ec022f32-b5da-4608-a798-33f084e15b28\") : secret \"ingress-nginx-admission\" not found"
	Nov 14 13:54:24 ingress-addon-legacy-814110 kubelet[1633]: E1114 13:54:24.239487    1633 pod_workers.go:191] Error syncing pod 8e74885b-9f59-40e3-bc05-1cb032cff8f3 ("ingress-nginx-admission-create-wkbjv_ingress-nginx(8e74885b-9f59-40e3-bc05-1cb032cff8f3)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Nov 14 13:54:25 ingress-addon-legacy-814110 kubelet[1633]: E1114 13:54:25.238800    1633 kubelet.go:1703] Unable to attach or mount volumes for pod "ingress-nginx-controller-7fcf777cb7-9bsww_ingress-nginx(ec022f32-b5da-4608-a798-33f084e15b28)": unmounted volumes=[webhook-cert], unattached volumes=[webhook-cert ingress-nginx-token-vtjkr]: timed out waiting for the condition; skipping pod
	Nov 14 13:54:25 ingress-addon-legacy-814110 kubelet[1633]: E1114 13:54:25.238853    1633 pod_workers.go:191] Error syncing pod ec022f32-b5da-4608-a798-33f084e15b28 ("ingress-nginx-controller-7fcf777cb7-9bsww_ingress-nginx(ec022f32-b5da-4608-a798-33f084e15b28)"), skipping: unmounted volumes=[webhook-cert], unattached volumes=[webhook-cert ingress-nginx-token-vtjkr]: timed out waiting for the condition
	Nov 14 13:54:26 ingress-addon-legacy-814110 kubelet[1633]: E1114 13:54:26.944459    1633 remote_image.go:113] PullImage "docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" from image service failed: rpc error: code = Unknown desc = reading manifest sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Nov 14 13:54:26 ingress-addon-legacy-814110 kubelet[1633]: E1114 13:54:26.944527    1633 kuberuntime_image.go:50] Pull image "docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" failed: rpc error: code = Unknown desc = reading manifest sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Nov 14 13:54:26 ingress-addon-legacy-814110 kubelet[1633]: E1114 13:54:26.944610    1633 kuberuntime_manager.go:818] container start failed: ErrImagePull: rpc error: code = Unknown desc = reading manifest sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Nov 14 13:54:26 ingress-addon-legacy-814110 kubelet[1633]: E1114 13:54:26.944648    1633 pod_workers.go:191] Error syncing pod a7b4d5e3-0c04-4f31-85da-0fc14bc4a673 ("ingress-nginx-admission-patch-9zb8z_ingress-nginx(a7b4d5e3-0c04-4f31-85da-0fc14bc4a673)"), skipping: failed to "StartContainer" for "patch" with ErrImagePull: "rpc error: code = Unknown desc = reading manifest sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Nov 14 13:54:35 ingress-addon-legacy-814110 kubelet[1633]: E1114 13:54:35.239321    1633 pod_workers.go:191] Error syncing pod 8e74885b-9f59-40e3-bc05-1cb032cff8f3 ("ingress-nginx-admission-create-wkbjv_ingress-nginx(8e74885b-9f59-40e3-bc05-1cb032cff8f3)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Nov 14 13:54:36 ingress-addon-legacy-814110 kubelet[1633]: E1114 13:54:36.282563    1633 container_manager_linux.go:512] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /docker/9a9e7aa0fbf6df369770f5cba56640f8d24be3ea2a261d6f6825a0ff065ba7a4, memory: /docker/9a9e7aa0fbf6df369770f5cba56640f8d24be3ea2a261d6f6825a0ff065ba7a4/system.slice/kubelet.service
	Nov 14 13:54:38 ingress-addon-legacy-814110 kubelet[1633]: E1114 13:54:38.239646    1633 pod_workers.go:191] Error syncing pod a7b4d5e3-0c04-4f31-85da-0fc14bc4a673 ("ingress-nginx-admission-patch-9zb8z_ingress-nginx(a7b4d5e3-0c04-4f31-85da-0fc14bc4a673)"), skipping: failed to "StartContainer" for "patch" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Nov 14 13:54:46 ingress-addon-legacy-814110 kubelet[1633]: E1114 13:54:46.239925    1633 pod_workers.go:191] Error syncing pod 8e74885b-9f59-40e3-bc05-1cb032cff8f3 ("ingress-nginx-admission-create-wkbjv_ingress-nginx(8e74885b-9f59-40e3-bc05-1cb032cff8f3)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Nov 14 13:54:51 ingress-addon-legacy-814110 kubelet[1633]: E1114 13:54:51.239524    1633 pod_workers.go:191] Error syncing pod a7b4d5e3-0c04-4f31-85da-0fc14bc4a673 ("ingress-nginx-admission-patch-9zb8z_ingress-nginx(a7b4d5e3-0c04-4f31-85da-0fc14bc4a673)"), skipping: failed to "StartContainer" for "patch" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Nov 14 13:54:57 ingress-addon-legacy-814110 kubelet[1633]: E1114 13:54:57.239425    1633 pod_workers.go:191] Error syncing pod 8e74885b-9f59-40e3-bc05-1cb032cff8f3 ("ingress-nginx-admission-create-wkbjv_ingress-nginx(8e74885b-9f59-40e3-bc05-1cb032cff8f3)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Nov 14 13:55:02 ingress-addon-legacy-814110 kubelet[1633]: E1114 13:55:02.239095    1633 pod_workers.go:191] Error syncing pod a7b4d5e3-0c04-4f31-85da-0fc14bc4a673 ("ingress-nginx-admission-patch-9zb8z_ingress-nginx(a7b4d5e3-0c04-4f31-85da-0fc14bc4a673)"), skipping: failed to "StartContainer" for "patch" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Nov 14 13:55:12 ingress-addon-legacy-814110 kubelet[1633]: E1114 13:55:12.239546    1633 pod_workers.go:191] Error syncing pod 8e74885b-9f59-40e3-bc05-1cb032cff8f3 ("ingress-nginx-admission-create-wkbjv_ingress-nginx(8e74885b-9f59-40e3-bc05-1cb032cff8f3)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Nov 14 13:55:15 ingress-addon-legacy-814110 kubelet[1633]: E1114 13:55:15.239260    1633 pod_workers.go:191] Error syncing pod a7b4d5e3-0c04-4f31-85da-0fc14bc4a673 ("ingress-nginx-admission-patch-9zb8z_ingress-nginx(a7b4d5e3-0c04-4f31-85da-0fc14bc4a673)"), skipping: failed to "StartContainer" for "patch" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Nov 14 13:55:29 ingress-addon-legacy-814110 kubelet[1633]: E1114 13:55:29.239244    1633 pod_workers.go:191] Error syncing pod a7b4d5e3-0c04-4f31-85da-0fc14bc4a673 ("ingress-nginx-admission-patch-9zb8z_ingress-nginx(a7b4d5e3-0c04-4f31-85da-0fc14bc4a673)"), skipping: failed to "StartContainer" for "patch" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Nov 14 13:55:41 ingress-addon-legacy-814110 kubelet[1633]: E1114 13:55:41.239279    1633 pod_workers.go:191] Error syncing pod a7b4d5e3-0c04-4f31-85da-0fc14bc4a673 ("ingress-nginx-admission-patch-9zb8z_ingress-nginx(a7b4d5e3-0c04-4f31-85da-0fc14bc4a673)"), skipping: failed to "StartContainer" for "patch" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	
	* 
	* ==> storage-provisioner [2cf9960ed4483ad88f1ec2b9a17f53e101cd65dcbc0dc50339d31b26821cb572] <==
	* I1114 13:49:58.723518       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1114 13:49:58.735493       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1114 13:49:58.735582       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1114 13:49:58.744035       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1114 13:49:58.744744       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7db563cf-348e-49ef-b918-ab8cd4b5c9ea", APIVersion:"v1", ResourceVersion:"422", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-814110_14709fb5-1c67-4b61-aff7-1b16d62de5fe became leader
	I1114 13:49:58.745091       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-814110_14709fb5-1c67-4b61-aff7-1b16d62de5fe!
	I1114 13:49:58.846155       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-814110_14709fb5-1c67-4b61-aff7-1b16d62de5fe!
	

                                                
                                                
-- /stdout --
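The kubelet log above isolates the root cause: every pull of docker.io/jettech/kube-webhook-certgen fails with Docker Hub's anonymous pull rate limit (toomanyrequests), so the admission create/patch jobs never run, the ingress-nginx-admission secret is never written, and the controller pod cannot mount its webhook-cert volume. A minimal mitigation sketch, assuming access to an authenticated Docker Hub account and that the jobs' imagePullPolicy accepts a cached image; pre-loading the image keeps kubelet from contacting Docker Hub at all:

	# Authenticate once on the CI host to lift the anonymous rate limit, then pull locally.
	docker login docker.io
	docker pull docker.io/jettech/kube-webhook-certgen:v1.5.1
	# Push the cached image into the minikube node's container storage.
	out/minikube-linux-arm64 -p ingress-addon-legacy-814110 image load docker.io/jettech/kube-webhook-certgen:v1.5.1
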
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-814110 -n ingress-addon-legacy-814110
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-814110 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-wkbjv ingress-nginx-admission-patch-9zb8z ingress-nginx-controller-7fcf777cb7-9bsww
helpers_test.go:274: ======> post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddonActivation]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ingress-addon-legacy-814110 describe pod ingress-nginx-admission-create-wkbjv ingress-nginx-admission-patch-9zb8z ingress-nginx-controller-7fcf777cb7-9bsww
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context ingress-addon-legacy-814110 describe pod ingress-nginx-admission-create-wkbjv ingress-nginx-admission-patch-9zb8z ingress-nginx-controller-7fcf777cb7-9bsww: exit status 1 (87.170575ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-wkbjv" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-9zb8z" not found
	Error from server (NotFound): pods "ingress-nginx-controller-7fcf777cb7-9bsww" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context ingress-addon-legacy-814110 describe pod ingress-nginx-admission-create-wkbjv ingress-nginx-admission-patch-9zb8z ingress-nginx-controller-7fcf777cb7-9bsww: exit status 1
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (363.70s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (92.53s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:206: (dbg) Run:  kubectl --context ingress-addon-legacy-814110 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
E1114 13:57:14.368714 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/addons-008546/client.crt: no such file or directory
addons_test.go:206: (dbg) Non-zero exit: kubectl --context ingress-addon-legacy-814110 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: exit status 1 (1m30.072968916s)

                                                
                                                
** stderr ** 
	error: timed out waiting for the condition on pods/ingress-nginx-controller-7fcf777cb7-9bsww

                                                
                                                
** /stderr **
addons_test.go:207: failed waiting for ingress-nginx-controller : exit status 1
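The timeout is structural rather than transient: the controller's webhook-cert volume mounts the ingress-nginx-admission secret, and that secret is produced by the same admission jobs whose image pulls failed above, so the controller can never reach Ready within any timeout. A quick way to confirm the dependency chain, assuming the same kubectl context:

	# Should return NotFound, matching the MountVolume.SetUp failures in the kubelet log.
	kubectl --context ingress-addon-legacy-814110 -n ingress-nginx get secret ingress-nginx-admission
	# The admission pods should sit in ErrImagePull/ImagePullBackOff.
	kubectl --context ingress-addon-legacy-814110 -n ingress-nginx get pods
	# The controller's events should show the unmounted webhook-cert volume.
	kubectl --context ingress-addon-legacy-814110 -n ingress-nginx describe pod ingress-nginx-controller-7fcf777cb7-9bsww
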
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-814110
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-814110:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9a9e7aa0fbf6df369770f5cba56640f8d24be3ea2a261d6f6825a0ff065ba7a4",
	        "Created": "2023-11-14T13:48:57.286856822Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1220737,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-11-14T13:48:57.611502817Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:977f9df3a3e2dccc16de7b5e8115e5e1294a98b99d56135cce7538135e7a7a9d",
	        "ResolvConfPath": "/var/lib/docker/containers/9a9e7aa0fbf6df369770f5cba56640f8d24be3ea2a261d6f6825a0ff065ba7a4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9a9e7aa0fbf6df369770f5cba56640f8d24be3ea2a261d6f6825a0ff065ba7a4/hostname",
	        "HostsPath": "/var/lib/docker/containers/9a9e7aa0fbf6df369770f5cba56640f8d24be3ea2a261d6f6825a0ff065ba7a4/hosts",
	        "LogPath": "/var/lib/docker/containers/9a9e7aa0fbf6df369770f5cba56640f8d24be3ea2a261d6f6825a0ff065ba7a4/9a9e7aa0fbf6df369770f5cba56640f8d24be3ea2a261d6f6825a0ff065ba7a4-json.log",
	        "Name": "/ingress-addon-legacy-814110",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-814110:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-814110",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/0b5abde21255f48f67b79dc09304078df88614923d36dcb4ae5f6f95af6a1bf3-init/diff:/var/lib/docker/overlay2/ad9b1528ccc99a2a23c8205d781cfd6ce01aa0662a87aad99178910b13bfc77f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0b5abde21255f48f67b79dc09304078df88614923d36dcb4ae5f6f95af6a1bf3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0b5abde21255f48f67b79dc09304078df88614923d36dcb4ae5f6f95af6a1bf3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0b5abde21255f48f67b79dc09304078df88614923d36dcb4ae5f6f95af6a1bf3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-814110",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-814110/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-814110",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-814110",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-814110",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e57fb3d18afa63c048b8eaa5b7da18b8c5559c45dfce92857f22cf94e60de464",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34294"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34293"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34290"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34292"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34291"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/e57fb3d18afa",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-814110": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "9a9e7aa0fbf6",
	                        "ingress-addon-legacy-814110"
	                    ],
	                    "NetworkID": "0cb255c5f52bb06a15e0868a02b7512aefce6a3c8849e0c9aedb587574294a74",
	                    "EndpointID": "9509e8fb1672e2192e1868fb502423b5c96afc625e0fdb408f57c8750c3abe55",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ingress-addon-legacy-814110 -n ingress-addon-legacy-814110
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-814110 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p ingress-addon-legacy-814110 logs -n 25: (1.415062069s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                  Args                                  |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| image          | functional-943397 image ls                                             | functional-943397           | jenkins | v1.32.0 | 14 Nov 23 13:47 UTC | 14 Nov 23 13:47 UTC |
	| image          | functional-943397 image load                                           | functional-943397           | jenkins | v1.32.0 | 14 Nov 23 13:47 UTC | 14 Nov 23 13:47 UTC |
	|                | /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-943397 image ls                                             | functional-943397           | jenkins | v1.32.0 | 14 Nov 23 13:47 UTC | 14 Nov 23 13:47 UTC |
	| image          | functional-943397 image save --daemon                                  | functional-943397           | jenkins | v1.32.0 | 14 Nov 23 13:47 UTC | 14 Nov 23 13:48 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-943397               |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| ssh            | functional-943397 ssh sudo cat                                         | functional-943397           | jenkins | v1.32.0 | 14 Nov 23 13:48 UTC | 14 Nov 23 13:48 UTC |
	|                | /etc/test/nested/copy/1191690/hosts                                    |                             |         |         |                     |                     |
	| ssh            | functional-943397 ssh sudo cat                                         | functional-943397           | jenkins | v1.32.0 | 14 Nov 23 13:48 UTC | 14 Nov 23 13:48 UTC |
	|                | /etc/ssl/certs/1191690.pem                                             |                             |         |         |                     |                     |
	| ssh            | functional-943397 ssh sudo cat                                         | functional-943397           | jenkins | v1.32.0 | 14 Nov 23 13:48 UTC | 14 Nov 23 13:48 UTC |
	|                | /usr/share/ca-certificates/1191690.pem                                 |                             |         |         |                     |                     |
	| ssh            | functional-943397 ssh sudo cat                                         | functional-943397           | jenkins | v1.32.0 | 14 Nov 23 13:48 UTC | 14 Nov 23 13:48 UTC |
	|                | /etc/ssl/certs/51391683.0                                              |                             |         |         |                     |                     |
	| ssh            | functional-943397 ssh sudo cat                                         | functional-943397           | jenkins | v1.32.0 | 14 Nov 23 13:48 UTC | 14 Nov 23 13:48 UTC |
	|                | /etc/ssl/certs/11916902.pem                                            |                             |         |         |                     |                     |
	| ssh            | functional-943397 ssh sudo cat                                         | functional-943397           | jenkins | v1.32.0 | 14 Nov 23 13:48 UTC | 14 Nov 23 13:48 UTC |
	|                | /usr/share/ca-certificates/11916902.pem                                |                             |         |         |                     |                     |
	| ssh            | functional-943397 ssh sudo cat                                         | functional-943397           | jenkins | v1.32.0 | 14 Nov 23 13:48 UTC | 14 Nov 23 13:48 UTC |
	|                | /etc/ssl/certs/3ec20f2e.0                                              |                             |         |         |                     |                     |
	| image          | functional-943397                                                      | functional-943397           | jenkins | v1.32.0 | 14 Nov 23 13:48 UTC | 14 Nov 23 13:48 UTC |
	|                | image ls --format short                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-943397                                                      | functional-943397           | jenkins | v1.32.0 | 14 Nov 23 13:48 UTC | 14 Nov 23 13:48 UTC |
	|                | image ls --format yaml                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| ssh            | functional-943397 ssh pgrep                                            | functional-943397           | jenkins | v1.32.0 | 14 Nov 23 13:48 UTC |                     |
	|                | buildkitd                                                              |                             |         |         |                     |                     |
	| image          | functional-943397 image build -t                                       | functional-943397           | jenkins | v1.32.0 | 14 Nov 23 13:48 UTC | 14 Nov 23 13:48 UTC |
	|                | localhost/my-image:functional-943397                                   |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                       |                             |         |         |                     |                     |
	| image          | functional-943397 image ls                                             | functional-943397           | jenkins | v1.32.0 | 14 Nov 23 13:48 UTC | 14 Nov 23 13:48 UTC |
	| image          | functional-943397                                                      | functional-943397           | jenkins | v1.32.0 | 14 Nov 23 13:48 UTC | 14 Nov 23 13:48 UTC |
	|                | image ls --format json                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-943397                                                      | functional-943397           | jenkins | v1.32.0 | 14 Nov 23 13:48 UTC | 14 Nov 23 13:48 UTC |
	|                | image ls --format table                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| update-context | functional-943397                                                      | functional-943397           | jenkins | v1.32.0 | 14 Nov 23 13:48 UTC | 14 Nov 23 13:48 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| update-context | functional-943397                                                      | functional-943397           | jenkins | v1.32.0 | 14 Nov 23 13:48 UTC | 14 Nov 23 13:48 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| update-context | functional-943397                                                      | functional-943397           | jenkins | v1.32.0 | 14 Nov 23 13:48 UTC | 14 Nov 23 13:48 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| delete         | -p functional-943397                                                   | functional-943397           | jenkins | v1.32.0 | 14 Nov 23 13:48 UTC | 14 Nov 23 13:48 UTC |
	| start          | -p ingress-addon-legacy-814110                                         | ingress-addon-legacy-814110 | jenkins | v1.32.0 | 14 Nov 23 13:48 UTC | 14 Nov 23 13:50 UTC |
	|                | --kubernetes-version=v1.18.20                                          |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	|                | -v=5 --driver=docker                                                   |                             |         |         |                     |                     |
	|                | --container-runtime=crio                                               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-814110                                            | ingress-addon-legacy-814110 | jenkins | v1.32.0 | 14 Nov 23 13:50 UTC |                     |
	|                | addons enable ingress                                                  |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-814110                                            | ingress-addon-legacy-814110 | jenkins | v1.32.0 | 14 Nov 23 13:56 UTC | 14 Nov 23 13:56 UTC |
	|                | addons enable ingress-dns                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
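For reproduction outside the test harness, the multi-row start entry in the Audit table above collapses into a single invocation (flags copied verbatim from the table, with its wrapped rows joined into one line):

	out/minikube-linux-arm64 start -p ingress-addon-legacy-814110 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker --container-runtime=crio
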
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/14 13:48:33
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1114 13:48:33.163611 1220278 out.go:296] Setting OutFile to fd 1 ...
	I1114 13:48:33.163759 1220278 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 13:48:33.163769 1220278 out.go:309] Setting ErrFile to fd 2...
	I1114 13:48:33.163775 1220278 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 13:48:33.164044 1220278 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17581-1186318/.minikube/bin
	I1114 13:48:33.164462 1220278 out.go:303] Setting JSON to false
	I1114 13:48:33.165459 1220278 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":37860,"bootTime":1699931854,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1114 13:48:33.165538 1220278 start.go:138] virtualization:  
	I1114 13:48:33.167856 1220278 out.go:177] * [ingress-addon-legacy-814110] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1114 13:48:33.170172 1220278 out.go:177]   - MINIKUBE_LOCATION=17581
	I1114 13:48:33.172028 1220278 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1114 13:48:33.170351 1220278 notify.go:220] Checking for updates...
	I1114 13:48:33.173835 1220278 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17581-1186318/kubeconfig
	I1114 13:48:33.175934 1220278 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17581-1186318/.minikube
	I1114 13:48:33.177801 1220278 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1114 13:48:33.179955 1220278 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1114 13:48:33.182416 1220278 driver.go:378] Setting default libvirt URI to qemu:///system
	I1114 13:48:33.206511 1220278 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1114 13:48:33.206618 1220278 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1114 13:48:33.288027 1220278 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:35 SystemTime:2023-11-14 13:48:33.277409035 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1114 13:48:33.288152 1220278 docker.go:295] overlay module found
	I1114 13:48:33.290460 1220278 out.go:177] * Using the docker driver based on user configuration
	I1114 13:48:33.292469 1220278 start.go:298] selected driver: docker
	I1114 13:48:33.292486 1220278 start.go:902] validating driver "docker" against <nil>
	I1114 13:48:33.292507 1220278 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1114 13:48:33.293358 1220278 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1114 13:48:33.360617 1220278 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:35 SystemTime:2023-11-14 13:48:33.351179765 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1114 13:48:33.360763 1220278 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1114 13:48:33.360990 1220278 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1114 13:48:33.363011 1220278 out.go:177] * Using Docker driver with root privileges
	I1114 13:48:33.364965 1220278 cni.go:84] Creating CNI manager for ""
	I1114 13:48:33.364986 1220278 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1114 13:48:33.364997 1220278 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1114 13:48:33.365014 1220278 start_flags.go:323] config:
	{Name:ingress-addon-legacy-814110 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-814110 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 13:48:33.367454 1220278 out.go:177] * Starting control plane node ingress-addon-legacy-814110 in cluster ingress-addon-legacy-814110
	I1114 13:48:33.369323 1220278 cache.go:121] Beginning downloading kic base image for docker with crio
	I1114 13:48:33.371429 1220278 out.go:177] * Pulling base image ...
	I1114 13:48:33.373447 1220278 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1114 13:48:33.373539 1220278 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 in local docker daemon
	I1114 13:48:33.390734 1220278 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 in local docker daemon, skipping pull
	I1114 13:48:33.390763 1220278 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 exists in daemon, skipping load
	I1114 13:48:33.448949 1220278 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I1114 13:48:33.448971 1220278 cache.go:56] Caching tarball of preloaded images
	I1114 13:48:33.449158 1220278 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1114 13:48:33.451487 1220278 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1114 13:48:33.453873 1220278 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I1114 13:48:33.577611 1220278 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4?checksum=md5:8ddd7f37d9a9977fe856222993d36c3d -> /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4
	I1114 13:48:49.379495 1220278 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I1114 13:48:49.379606 1220278 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 ...
	I1114 13:48:50.571379 1220278 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on crio
	I1114 13:48:50.571766 1220278 profile.go:148] Saving config to /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/config.json ...
	I1114 13:48:50.571803 1220278 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/config.json: {Name:mkf4549114595f007cb9c1dcef1d85ffa7059f52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 13:48:50.571992 1220278 cache.go:194] Successfully downloaded all kic artifacts
	I1114 13:48:50.572018 1220278 start.go:365] acquiring machines lock for ingress-addon-legacy-814110: {Name:mkca719b9d5f584c15d6e33b8df461d5b71aacdb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 13:48:50.572084 1220278 start.go:369] acquired machines lock for "ingress-addon-legacy-814110" in 53.152µs
	I1114 13:48:50.572105 1220278 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-814110 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-814110 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1114 13:48:50.572181 1220278 start.go:125] createHost starting for "" (driver="docker")
	I1114 13:48:50.574720 1220278 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1114 13:48:50.574952 1220278 start.go:159] libmachine.API.Create for "ingress-addon-legacy-814110" (driver="docker")
	I1114 13:48:50.574996 1220278 client.go:168] LocalClient.Create starting
	I1114 13:48:50.575067 1220278 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/ca.pem
	I1114 13:48:50.575102 1220278 main.go:141] libmachine: Decoding PEM data...
	I1114 13:48:50.575122 1220278 main.go:141] libmachine: Parsing certificate...
	I1114 13:48:50.575184 1220278 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/cert.pem
	I1114 13:48:50.575207 1220278 main.go:141] libmachine: Decoding PEM data...
	I1114 13:48:50.575222 1220278 main.go:141] libmachine: Parsing certificate...
	I1114 13:48:50.575589 1220278 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-814110 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1114 13:48:50.593896 1220278 cli_runner.go:211] docker network inspect ingress-addon-legacy-814110 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1114 13:48:50.593986 1220278 network_create.go:281] running [docker network inspect ingress-addon-legacy-814110] to gather additional debugging logs...
	I1114 13:48:50.594007 1220278 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-814110
	W1114 13:48:50.611187 1220278 cli_runner.go:211] docker network inspect ingress-addon-legacy-814110 returned with exit code 1
	I1114 13:48:50.611221 1220278 network_create.go:284] error running [docker network inspect ingress-addon-legacy-814110]: docker network inspect ingress-addon-legacy-814110: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-814110 not found
	I1114 13:48:50.611237 1220278 network_create.go:286] output of [docker network inspect ingress-addon-legacy-814110]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-814110 not found
	
	** /stderr **
	I1114 13:48:50.611339 1220278 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1114 13:48:50.630255 1220278 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400044b930}
	I1114 13:48:50.630298 1220278 network_create.go:124] attempt to create docker network ingress-addon-legacy-814110 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1114 13:48:50.630358 1220278 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-814110 ingress-addon-legacy-814110
	I1114 13:48:50.709089 1220278 network_create.go:108] docker network ingress-addon-legacy-814110 192.168.49.0/24 created
	I1114 13:48:50.709123 1220278 kic.go:121] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-814110" container
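Annotation: the subnet line above shows how the /24 is carved up — the gateway takes the first host address and containers start at the second, which is why the "calculated static IP" comes out as 192.168.49.2. A small Go sketch of that arithmetic, using only the CIDR from the log:

package main

import (
	"fmt"
	"net"
)

func main() {
	// Subnet as chosen in the log line above.
	_, ipnet, err := net.ParseCIDR("192.168.49.0/24")
	if err != nil {
		panic(err)
	}
	base := ipnet.IP.To4()

	// add returns base+n within the subnet (sufficient for a /24).
	add := func(n byte) net.IP {
		ip := make(net.IP, 4)
		copy(ip, base)
		ip[3] += n
		return ip
	}
	gateway := add(1)   // 192.168.49.1
	clientMin := add(2) // 192.168.49.2 — the container's static IP

	// Broadcast is the network address with all host bits set.
	broadcast := make(net.IP, 4)
	for i := range broadcast {
		broadcast[i] = base[i] | ^ipnet.Mask[i]
	}
	fmt.Println(gateway, clientMin, broadcast)
}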
	I1114 13:48:50.709209 1220278 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1114 13:48:50.725637 1220278 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-814110 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-814110 --label created_by.minikube.sigs.k8s.io=true
	I1114 13:48:50.743823 1220278 oci.go:103] Successfully created a docker volume ingress-addon-legacy-814110
	I1114 13:48:50.743909 1220278 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-814110-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-814110 --entrypoint /usr/bin/test -v ingress-addon-legacy-814110:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 -d /var/lib
	I1114 13:48:52.293228 1220278 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-814110-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-814110 --entrypoint /usr/bin/test -v ingress-addon-legacy-814110:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 -d /var/lib: (1.549274169s)
	I1114 13:48:52.293263 1220278 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-814110
	I1114 13:48:52.293277 1220278 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1114 13:48:52.293298 1220278 kic.go:194] Starting extracting preloaded images to volume ...
	I1114 13:48:52.293397 1220278 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-814110:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 -I lz4 -xf /preloaded.tar -C /extractDir
	I1114 13:48:57.204649 1220278 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-814110:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 -I lz4 -xf /preloaded.tar -C /extractDir: (4.911209557s)
	I1114 13:48:57.204691 1220278 kic.go:203] duration metric: took 4.911390 seconds to extract preloaded images to volume
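Annotation: the extraction above runs tar inside a throwaway container so the preloaded images land directly in the named volume, before the real node container is started on top of it. A Go sketch that shells out the same way (paths taken from the log; the kicbase image reference is shortened here by dropping the @sha256 digest):

package main

import (
	"os"
	"os/exec"
)

func main() {
	tarball := "/home/jenkins/minikube-integration/17581-1186318/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4"
	// Mirrors the `docker run --rm --entrypoint /usr/bin/tar ...` line above:
	// bind-mount the tarball read-only, mount the volume, extract into it.
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", "ingress-addon-legacy-814110:/extractDir",
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565",
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}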
	W1114 13:48:57.204834 1220278 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1114 13:48:57.204968 1220278 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1114 13:48:57.270920 1220278 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-814110 --name ingress-addon-legacy-814110 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-814110 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-814110 --network ingress-addon-legacy-814110 --ip 192.168.49.2 --volume ingress-addon-legacy-814110:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24
	I1114 13:48:57.620677 1220278 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-814110 --format={{.State.Running}}
	I1114 13:48:57.642989 1220278 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-814110 --format={{.State.Status}}
	I1114 13:48:57.666636 1220278 cli_runner.go:164] Run: docker exec ingress-addon-legacy-814110 stat /var/lib/dpkg/alternatives/iptables
	I1114 13:48:57.732348 1220278 oci.go:144] the created container "ingress-addon-legacy-814110" has a running status.
	I1114 13:48:57.732375 1220278 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17581-1186318/.minikube/machines/ingress-addon-legacy-814110/id_rsa...
	I1114 13:48:58.653466 1220278 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1186318/.minikube/machines/ingress-addon-legacy-814110/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1114 13:48:58.653512 1220278 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17581-1186318/.minikube/machines/ingress-addon-legacy-814110/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
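Annotation: the id_rsa/authorized_keys steps above generate a fresh RSA keypair on the host and copy the public half into the container as /home/docker/.ssh/authorized_keys. A self-contained Go sketch of that key generation, assuming the widely used golang.org/x/crypto/ssh package for the authorized_keys encoding:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// Private key, PEM-encoded (what lands in machines/<name>/id_rsa).
	privPEM := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key),
	})
	if err := os.WriteFile("id_rsa", privPEM, 0o600); err != nil {
		panic(err)
	}
	// Public half in authorized_keys format (pushed into the container).
	pub, err := ssh.NewPublicKey(&key.PublicKey)
	if err != nil {
		panic(err)
	}
	if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0o644); err != nil {
		panic(err)
	}
}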
	I1114 13:48:58.677504 1220278 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-814110 --format={{.State.Status}}
	I1114 13:48:58.701064 1220278 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1114 13:48:58.701087 1220278 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-814110 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1114 13:48:58.768205 1220278 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-814110 --format={{.State.Status}}
	I1114 13:48:58.787416 1220278 machine.go:88] provisioning docker machine ...
	I1114 13:48:58.787455 1220278 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-814110"
	I1114 13:48:58.787518 1220278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-814110
	I1114 13:48:58.815016 1220278 main.go:141] libmachine: Using SSH client type: native
	I1114 13:48:58.815578 1220278 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bded0] 0x3c0640 <nil>  [] 0s} 127.0.0.1 34294 <nil> <nil>}
	I1114 13:48:58.815598 1220278 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-814110 && echo "ingress-addon-legacy-814110" | sudo tee /etc/hostname
	I1114 13:48:58.980043 1220278 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-814110
	
	I1114 13:48:58.980193 1220278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-814110
	I1114 13:48:58.999841 1220278 main.go:141] libmachine: Using SSH client type: native
	I1114 13:48:59.000248 1220278 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bded0] 0x3c0640 <nil>  [] 0s} 127.0.0.1 34294 <nil> <nil>}
	I1114 13:48:59.000267 1220278 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-814110' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-814110/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-814110' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1114 13:48:59.141895 1220278 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1114 13:48:59.141926 1220278 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17581-1186318/.minikube CaCertPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17581-1186318/.minikube}
	I1114 13:48:59.141945 1220278 ubuntu.go:177] setting up certificates
	I1114 13:48:59.141953 1220278 provision.go:83] configureAuth start
	I1114 13:48:59.142013 1220278 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-814110
	I1114 13:48:59.159522 1220278 provision.go:138] copyHostCerts
	I1114 13:48:59.159563 1220278 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17581-1186318/.minikube/ca.pem
	I1114 13:48:59.159593 1220278 exec_runner.go:144] found /home/jenkins/minikube-integration/17581-1186318/.minikube/ca.pem, removing ...
	I1114 13:48:59.159599 1220278 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17581-1186318/.minikube/ca.pem
	I1114 13:48:59.159679 1220278 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17581-1186318/.minikube/ca.pem (1082 bytes)
	I1114 13:48:59.159755 1220278 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17581-1186318/.minikube/cert.pem
	I1114 13:48:59.159772 1220278 exec_runner.go:144] found /home/jenkins/minikube-integration/17581-1186318/.minikube/cert.pem, removing ...
	I1114 13:48:59.159776 1220278 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17581-1186318/.minikube/cert.pem
	I1114 13:48:59.159800 1220278 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17581-1186318/.minikube/cert.pem (1123 bytes)
	I1114 13:48:59.159836 1220278 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17581-1186318/.minikube/key.pem
	I1114 13:48:59.159851 1220278 exec_runner.go:144] found /home/jenkins/minikube-integration/17581-1186318/.minikube/key.pem, removing ...
	I1114 13:48:59.159855 1220278 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17581-1186318/.minikube/key.pem
	I1114 13:48:59.159878 1220278 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17581-1186318/.minikube/key.pem (1675 bytes)
	I1114 13:48:59.159918 1220278 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17581-1186318/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17581-1186318/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17581-1186318/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-814110 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-814110]
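Annotation: the server-cert line above lists the SANs baked into the certificate (IPs plus hostnames), signed by the shared minikube CA. A Go sketch of issuing such a cert with crypto/x509; the throwaway CA here stands in for minikube's ca.pem/ca-key.pem, and the 26280h lifetime mirrors the CertExpiration value in the config dump:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Throwaway CA standing in for the pre-existing minikubeCA.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	check(err)
	caCert, err := x509.ParseCertificate(caDER)
	check(err)

	// Server cert with the SAN list from the log line above.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ingress-addon-legacy-814110"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("192.168.49.2"), net.ParseIP("127.0.0.1")},
		DNSNames:     []string{"localhost", "minikube", "ingress-addon-legacy-814110"},
	}
	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	check(err)
	check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
}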
	I1114 13:48:59.314222 1220278 provision.go:172] copyRemoteCerts
	I1114 13:48:59.314295 1220278 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1114 13:48:59.314341 1220278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-814110
	I1114 13:48:59.332186 1220278 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34294 SSHKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/ingress-addon-legacy-814110/id_rsa Username:docker}
	I1114 13:48:59.430993 1220278 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1186318/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1114 13:48:59.431113 1220278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1114 13:48:59.458858 1220278 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1114 13:48:59.458922 1220278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1114 13:48:59.486759 1220278 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1186318/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1114 13:48:59.486820 1220278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I1114 13:48:59.514330 1220278 provision.go:86] duration metric: configureAuth took 372.36419ms
	I1114 13:48:59.514356 1220278 ubuntu.go:193] setting minikube options for container-runtime
	I1114 13:48:59.514553 1220278 config.go:182] Loaded profile config "ingress-addon-legacy-814110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1114 13:48:59.514658 1220278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-814110
	I1114 13:48:59.532988 1220278 main.go:141] libmachine: Using SSH client type: native
	I1114 13:48:59.533423 1220278 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bded0] 0x3c0640 <nil>  [] 0s} 127.0.0.1 34294 <nil> <nil>}
	I1114 13:48:59.533459 1220278 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1114 13:48:59.812659 1220278 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1114 13:48:59.812721 1220278 machine.go:91] provisioned docker machine in 1.025281287s
	I1114 13:48:59.812738 1220278 client.go:171] LocalClient.Create took 9.237730526s
	I1114 13:48:59.812761 1220278 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-814110" took 9.237810526s
	I1114 13:48:59.812771 1220278 start.go:300] post-start starting for "ingress-addon-legacy-814110" (driver="docker")
	I1114 13:48:59.812793 1220278 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1114 13:48:59.812862 1220278 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1114 13:48:59.812935 1220278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-814110
	I1114 13:48:59.831211 1220278 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34294 SSHKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/ingress-addon-legacy-814110/id_rsa Username:docker}
	I1114 13:48:59.931393 1220278 ssh_runner.go:195] Run: cat /etc/os-release
	I1114 13:48:59.935474 1220278 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1114 13:48:59.935513 1220278 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1114 13:48:59.935525 1220278 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1114 13:48:59.935533 1220278 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1114 13:48:59.935544 1220278 filesync.go:126] Scanning /home/jenkins/minikube-integration/17581-1186318/.minikube/addons for local assets ...
	I1114 13:48:59.935601 1220278 filesync.go:126] Scanning /home/jenkins/minikube-integration/17581-1186318/.minikube/files for local assets ...
	I1114 13:48:59.935693 1220278 filesync.go:149] local asset: /home/jenkins/minikube-integration/17581-1186318/.minikube/files/etc/ssl/certs/11916902.pem -> 11916902.pem in /etc/ssl/certs
	I1114 13:48:59.935705 1220278 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1186318/.minikube/files/etc/ssl/certs/11916902.pem -> /etc/ssl/certs/11916902.pem
	I1114 13:48:59.935815 1220278 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1114 13:48:59.945912 1220278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/files/etc/ssl/certs/11916902.pem --> /etc/ssl/certs/11916902.pem (1708 bytes)
	I1114 13:48:59.974335 1220278 start.go:303] post-start completed in 161.549177ms
	I1114 13:48:59.974693 1220278 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-814110
	I1114 13:48:59.994163 1220278 profile.go:148] Saving config to /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/config.json ...
	I1114 13:48:59.994429 1220278 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1114 13:48:59.994469 1220278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-814110
	I1114 13:49:00.017149 1220278 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34294 SSHKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/ingress-addon-legacy-814110/id_rsa Username:docker}
	I1114 13:49:00.159810 1220278 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1114 13:49:00.166368 1220278 start.go:128] duration metric: createHost completed in 9.594170348s
	I1114 13:49:00.166398 1220278 start.go:83] releasing machines lock for "ingress-addon-legacy-814110", held for 9.594302294s
	I1114 13:49:00.166487 1220278 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-814110
	I1114 13:49:00.186508 1220278 ssh_runner.go:195] Run: cat /version.json
	I1114 13:49:00.186559 1220278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-814110
	I1114 13:49:00.187030 1220278 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1114 13:49:00.187113 1220278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-814110
	I1114 13:49:00.208217 1220278 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34294 SSHKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/ingress-addon-legacy-814110/id_rsa Username:docker}
	I1114 13:49:00.209930 1220278 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34294 SSHKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/ingress-addon-legacy-814110/id_rsa Username:docker}
	I1114 13:49:00.447439 1220278 ssh_runner.go:195] Run: systemctl --version
	I1114 13:49:00.453064 1220278 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1114 13:49:00.598986 1220278 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1114 13:49:00.604643 1220278 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1114 13:49:00.628139 1220278 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1114 13:49:00.628233 1220278 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1114 13:49:00.668346 1220278 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1114 13:49:00.668369 1220278 start.go:472] detecting cgroup driver to use...
	I1114 13:49:00.668401 1220278 detect.go:196] detected "cgroupfs" cgroup driver on host os
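Annotation: the detection above inspects the host rather than the container before choosing the "cgroupfs" driver. One common way to make that kind of call is to check whether /sys/fs/cgroup is a cgroup v2 unified mount; a hedged Go sketch (this only distinguishes v1 from v2, a simplification of minikube's actual logic):

package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

func main() {
	var st unix.Statfs_t
	if err := unix.Statfs("/sys/fs/cgroup", &st); err != nil {
		panic(err)
	}
	// On a unified hierarchy the mount itself is cgroup2fs.
	if st.Type == unix.CGROUP2_SUPER_MAGIC {
		fmt.Println("cgroup v2 (unified hierarchy)")
	} else {
		fmt.Println("cgroup v1 (legacy hierarchy)")
	}
}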
	I1114 13:49:00.668454 1220278 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1114 13:49:00.687681 1220278 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1114 13:49:00.701225 1220278 docker.go:203] disabling cri-docker service (if available) ...
	I1114 13:49:00.701291 1220278 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1114 13:49:00.718209 1220278 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1114 13:49:00.735179 1220278 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1114 13:49:00.831178 1220278 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1114 13:49:00.940177 1220278 docker.go:219] disabling docker service ...
	I1114 13:49:00.940248 1220278 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1114 13:49:00.962065 1220278 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1114 13:49:00.976509 1220278 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1114 13:49:01.079979 1220278 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1114 13:49:01.179055 1220278 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1114 13:49:01.193605 1220278 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1114 13:49:01.215098 1220278 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1114 13:49:01.215173 1220278 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 13:49:01.227266 1220278 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1114 13:49:01.227343 1220278 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 13:49:01.239746 1220278 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 13:49:01.251847 1220278 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 13:49:01.263913 1220278 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1114 13:49:01.276506 1220278 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1114 13:49:01.287691 1220278 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1114 13:49:01.297957 1220278 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1114 13:49:01.388403 1220278 ssh_runner.go:195] Run: sudo systemctl restart crio
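Annotation: the sed invocations above rewrite CRI-O's drop-in config in place (pause image, cgroup manager, conmon cgroup) before the daemon-reload and restart. The same edit expressed in Go, using the path and replacement values from the log; run as root, and only as an illustration:

package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "..."|'
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.2"`))
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, data, 0o644); err != nil {
		panic(err)
	}
}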
	I1114 13:49:01.508184 1220278 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1114 13:49:01.508268 1220278 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1114 13:49:01.513028 1220278 start.go:540] Will wait 60s for crictl version
	I1114 13:49:01.513092 1220278 ssh_runner.go:195] Run: which crictl
	I1114 13:49:01.517480 1220278 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1114 13:49:01.557641 1220278 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1114 13:49:01.557729 1220278 ssh_runner.go:195] Run: crio --version
	I1114 13:49:01.603554 1220278 ssh_runner.go:195] Run: crio --version
	I1114 13:49:01.649431 1220278 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.6 ...
	I1114 13:49:01.651195 1220278 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-814110 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1114 13:49:01.669189 1220278 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1114 13:49:01.674102 1220278 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1114 13:49:01.687448 1220278 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1114 13:49:01.687520 1220278 ssh_runner.go:195] Run: sudo crictl images --output json
	I1114 13:49:01.741636 1220278 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1114 13:49:01.741712 1220278 ssh_runner.go:195] Run: which lz4
	I1114 13:49:01.746209 1220278 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 -> /preloaded.tar.lz4
	I1114 13:49:01.746303 1220278 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1114 13:49:01.750850 1220278 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1114 13:49:01.750888 1220278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-arm64.tar.lz4 --> /preloaded.tar.lz4 (489766197 bytes)
	I1114 13:49:03.850909 1220278 crio.go:444] Took 2.104634 seconds to copy over tarball
	I1114 13:49:03.850990 1220278 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1114 13:49:06.743110 1220278 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.89208964s)
	I1114 13:49:06.743136 1220278 crio.go:451] Took 2.892203 seconds to extract the tarball
	I1114 13:49:06.743146 1220278 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1114 13:49:06.830706 1220278 ssh_runner.go:195] Run: sudo crictl images --output json
	I1114 13:49:06.873060 1220278 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1114 13:49:06.873087 1220278 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1114 13:49:06.873165 1220278 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 13:49:06.873354 1220278 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I1114 13:49:06.873494 1220278 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1114 13:49:06.873577 1220278 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I1114 13:49:06.873647 1220278 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I1114 13:49:06.873727 1220278 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1114 13:49:06.873797 1220278 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1114 13:49:06.873870 1220278 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1114 13:49:06.876718 1220278 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1114 13:49:06.877232 1220278 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I1114 13:49:06.877263 1220278 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1114 13:49:06.877338 1220278 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I1114 13:49:06.877428 1220278 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I1114 13:49:06.877513 1220278 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I1114 13:49:06.877556 1220278 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 13:49:06.877599 1220278 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	W1114 13:49:07.229169 1220278 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1114 13:49:07.229510 1220278 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	W1114 13:49:07.238709 1220278 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1114 13:49:07.238889 1220278 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	W1114 13:49:07.247280 1220278 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I1114 13:49:07.247527 1220278 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I1114 13:49:07.262067 1220278 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	W1114 13:49:07.274945 1220278 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1114 13:49:07.275127 1220278 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	W1114 13:49:07.288947 1220278 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I1114 13:49:07.289175 1220278 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I1114 13:49:07.318275 1220278 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I1114 13:49:07.318367 1220278 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I1114 13:49:07.318447 1220278 ssh_runner.go:195] Run: which crictl
	W1114 13:49:07.333558 1220278 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1114 13:49:07.333752 1220278 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I1114 13:49:07.392433 1220278 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I1114 13:49:07.392478 1220278 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1114 13:49:07.392531 1220278 ssh_runner.go:195] Run: which crictl
	W1114 13:49:07.405246 1220278 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1114 13:49:07.405418 1220278 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 13:49:07.464582 1220278 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I1114 13:49:07.464650 1220278 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I1114 13:49:07.464710 1220278 ssh_runner.go:195] Run: which crictl
	I1114 13:49:07.464808 1220278 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I1114 13:49:07.464825 1220278 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1114 13:49:07.464847 1220278 ssh_runner.go:195] Run: which crictl
	I1114 13:49:07.464926 1220278 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I1114 13:49:07.464947 1220278 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I1114 13:49:07.464980 1220278 ssh_runner.go:195] Run: which crictl
	I1114 13:49:07.500619 1220278 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I1114 13:49:07.500673 1220278 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I1114 13:49:07.500724 1220278 ssh_runner.go:195] Run: which crictl
	I1114 13:49:07.500727 1220278 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I1114 13:49:07.500847 1220278 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I1114 13:49:07.500866 1220278 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I1114 13:49:07.500937 1220278 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I1114 13:49:07.501003 1220278 ssh_runner.go:195] Run: which crictl
	I1114 13:49:07.638271 1220278 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1114 13:49:07.638321 1220278 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 13:49:07.638369 1220278 ssh_runner.go:195] Run: which crictl
	I1114 13:49:07.638514 1220278 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I1114 13:49:07.638574 1220278 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I1114 13:49:07.638620 1220278 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1114 13:49:07.638715 1220278 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	I1114 13:49:07.638730 1220278 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I1114 13:49:07.638803 1220278 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I1114 13:49:07.638772 1220278 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	I1114 13:49:07.751682 1220278 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	I1114 13:49:07.751799 1220278 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 13:49:07.751899 1220278 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	I1114 13:49:07.766106 1220278 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	I1114 13:49:07.766224 1220278 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I1114 13:49:07.766301 1220278 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	I1114 13:49:07.838701 1220278 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1114 13:49:07.838782 1220278 cache_images.go:92] LoadImages completed in 965.674802ms
	W1114 13:49:07.838842 1220278 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20: no such file or directory
	I1114 13:49:07.838914 1220278 ssh_runner.go:195] Run: crio config
	I1114 13:49:07.894531 1220278 cni.go:84] Creating CNI manager for ""
	I1114 13:49:07.894574 1220278 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1114 13:49:07.894604 1220278 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1114 13:49:07.894664 1220278 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-814110 NodeName:ingress-addon-legacy-814110 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1114 13:49:07.894798 1220278 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-814110"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1114 13:49:07.894885 1220278 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=ingress-addon-legacy-814110 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-814110 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1114 13:49:07.894953 1220278 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I1114 13:49:07.905621 1220278 binaries.go:44] Found k8s binaries, skipping transfer
	I1114 13:49:07.905694 1220278 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1114 13:49:07.916022 1220278 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I1114 13:49:07.937025 1220278 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I1114 13:49:07.958227 1220278 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1114 13:49:07.979520 1220278 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1114 13:49:07.983965 1220278 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1114 13:49:07.997334 1220278 certs.go:56] Setting up /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110 for IP: 192.168.49.2
	I1114 13:49:07.997377 1220278 certs.go:190] acquiring lock for shared ca certs: {Name:mk1fdfc415c611904fd8e5ce757e79f4579c67a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 13:49:07.997520 1220278 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17581-1186318/.minikube/ca.key
	I1114 13:49:07.997564 1220278 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17581-1186318/.minikube/proxy-client-ca.key
	I1114 13:49:07.997618 1220278 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/client.key
	I1114 13:49:07.997634 1220278 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/client.crt with IP's: []
	I1114 13:49:08.360921 1220278 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/client.crt ...
	I1114 13:49:08.360952 1220278 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/client.crt: {Name:mk6aaf376189cad2f2c54a2b6881f0f572fb195a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 13:49:08.361153 1220278 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/client.key ...
	I1114 13:49:08.361167 1220278 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/client.key: {Name:mk5399a276929781a659189f8c0cd95f19bc1b96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 13:49:08.361261 1220278 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/apiserver.key.dd3b5fb2
	I1114 13:49:08.361283 1220278 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1114 13:49:08.698870 1220278 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/apiserver.crt.dd3b5fb2 ...
	I1114 13:49:08.698902 1220278 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/apiserver.crt.dd3b5fb2: {Name:mk334d21a70de7c95384406f57c5243244717ac1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 13:49:08.699081 1220278 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/apiserver.key.dd3b5fb2 ...
	I1114 13:49:08.699096 1220278 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/apiserver.key.dd3b5fb2: {Name:mkaf1c9b70dc70e5b6a7838ce04b084c33d9b760 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 13:49:08.699178 1220278 certs.go:337] copying /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/apiserver.crt
	I1114 13:49:08.699251 1220278 certs.go:341] copying /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/apiserver.key
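Annotation: the .dd3b5fb2 suffix above is presumably a short fingerprint of the certificate's parameters, so a changed SAN list produces a new filename and forces regeneration rather than silently reusing a stale cert. A purely hypothetical sketch of deriving such a suffix — the FNV hash and the joining scheme here are assumptions, not minikube's actual scheme:

package main

import (
	"fmt"
	"hash/fnv"
	"strings"
)

func main() {
	// SANs from the "Generating cert ... with IP's" line above.
	sans := []string{"192.168.49.2", "10.96.0.1", "127.0.0.1", "10.0.0.1"}
	h := fnv.New32a()
	h.Write([]byte(strings.Join(sans, ",")))
	// Yields a stable 8-hex-char tag to append to the filename.
	fmt.Printf("apiserver.crt.%08x\n", h.Sum32())
}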
	I1114 13:49:08.699307 1220278 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/proxy-client.key
	I1114 13:49:08.699326 1220278 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/proxy-client.crt with IP's: []
	I1114 13:49:09.420809 1220278 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/proxy-client.crt ...
	I1114 13:49:09.420847 1220278 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/proxy-client.crt: {Name:mk737756b64470a7dceadb60be43ad81273a87d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 13:49:09.421026 1220278 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/proxy-client.key ...
	I1114 13:49:09.421040 1220278 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/proxy-client.key: {Name:mk1d86c049a616a7ff17b9b6b5b2d42ca4cc6afb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 13:49:09.421126 1220278 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1114 13:49:09.421146 1220278 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1114 13:49:09.421161 1220278 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1114 13:49:09.421177 1220278 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1114 13:49:09.421189 1220278 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1186318/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1114 13:49:09.421205 1220278 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1186318/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1114 13:49:09.421219 1220278 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1186318/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1114 13:49:09.421236 1220278 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1186318/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1114 13:49:09.421286 1220278 certs.go:437] found cert: /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/home/jenkins/minikube-integration/17581-1186318/.minikube/certs/1191690.pem (1338 bytes)
	W1114 13:49:09.421334 1220278 certs.go:433] ignoring /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/home/jenkins/minikube-integration/17581-1186318/.minikube/certs/1191690_empty.pem, impossibly tiny 0 bytes
	I1114 13:49:09.421355 1220278 certs.go:437] found cert: /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/home/jenkins/minikube-integration/17581-1186318/.minikube/certs/ca-key.pem (1675 bytes)
	I1114 13:49:09.421385 1220278 certs.go:437] found cert: /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/home/jenkins/minikube-integration/17581-1186318/.minikube/certs/ca.pem (1082 bytes)
	I1114 13:49:09.421416 1220278 certs.go:437] found cert: /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/home/jenkins/minikube-integration/17581-1186318/.minikube/certs/cert.pem (1123 bytes)
	I1114 13:49:09.421442 1220278 certs.go:437] found cert: /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/home/jenkins/minikube-integration/17581-1186318/.minikube/certs/key.pem (1675 bytes)
	I1114 13:49:09.421492 1220278 certs.go:437] found cert: /home/jenkins/minikube-integration/17581-1186318/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17581-1186318/.minikube/files/etc/ssl/certs/11916902.pem (1708 bytes)
	I1114 13:49:09.421528 1220278 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1186318/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1114 13:49:09.421543 1220278 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/1191690.pem -> /usr/share/ca-certificates/1191690.pem
	I1114 13:49:09.421556 1220278 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1186318/.minikube/files/etc/ssl/certs/11916902.pem -> /usr/share/ca-certificates/11916902.pem
	I1114 13:49:09.422158 1220278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1114 13:49:09.450584 1220278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1114 13:49:09.479444 1220278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1114 13:49:09.508035 1220278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1114 13:49:09.535751 1220278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1114 13:49:09.564139 1220278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1114 13:49:09.591183 1220278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1114 13:49:09.619976 1220278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1114 13:49:09.648899 1220278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1114 13:49:09.677721 1220278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/1191690.pem --> /usr/share/ca-certificates/1191690.pem (1338 bytes)
	I1114 13:49:09.706386 1220278 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/files/etc/ssl/certs/11916902.pem --> /usr/share/ca-certificates/11916902.pem (1708 bytes)
	I1114 13:49:09.735130 1220278 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1114 13:49:09.756758 1220278 ssh_runner.go:195] Run: openssl version
	I1114 13:49:09.765087 1220278 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1114 13:49:09.778185 1220278 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1114 13:49:09.783278 1220278 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 14 13:34 /usr/share/ca-certificates/minikubeCA.pem
	I1114 13:49:09.783363 1220278 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1114 13:49:09.794375 1220278 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1114 13:49:09.806982 1220278 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1191690.pem && ln -fs /usr/share/ca-certificates/1191690.pem /etc/ssl/certs/1191690.pem"
	I1114 13:49:09.818434 1220278 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1191690.pem
	I1114 13:49:09.823781 1220278 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 14 13:42 /usr/share/ca-certificates/1191690.pem
	I1114 13:49:09.823853 1220278 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1191690.pem
	I1114 13:49:09.833064 1220278 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1191690.pem /etc/ssl/certs/51391683.0"
	I1114 13:49:09.845304 1220278 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11916902.pem && ln -fs /usr/share/ca-certificates/11916902.pem /etc/ssl/certs/11916902.pem"
	I1114 13:49:09.857837 1220278 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11916902.pem
	I1114 13:49:09.862711 1220278 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 14 13:42 /usr/share/ca-certificates/11916902.pem
	I1114 13:49:09.862776 1220278 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11916902.pem
	I1114 13:49:09.871581 1220278 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11916902.pem /etc/ssl/certs/3ec20f2e.0"
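	The three `ln -fs` targets above follow OpenSSL's subject-hash naming: `openssl x509 -hash -noout` prints the certificate's subject hash (b5213941 for minikubeCA.pem here), and OpenSSL-based clients look CAs up under /etc/ssl/certs/<hash>.0. A minimal sketch of the same trust-installation step, assuming a PEM certificate at ./extra-ca.pem (hypothetical path):
	
		# compute the subject hash OpenSSL uses for CA lookups
		hash=$(openssl x509 -hash -noout -in ./extra-ca.pem)
		# expose the cert under <hash>.0 so TLS clients on the node trust it
		sudo cp ./extra-ca.pem /usr/share/ca-certificates/extra-ca.pem
		sudo ln -fs /usr/share/ca-certificates/extra-ca.pem "/etc/ssl/certs/${hash}.0"
	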
	I1114 13:49:09.883105 1220278 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1114 13:49:09.887405 1220278 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1114 13:49:09.887468 1220278 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-814110 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-814110 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 13:49:09.887542 1220278 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1114 13:49:09.887604 1220278 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1114 13:49:09.928978 1220278 cri.go:89] found id: ""
	I1114 13:49:09.929056 1220278 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1114 13:49:09.940170 1220278 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1114 13:49:09.951547 1220278 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1114 13:49:09.951642 1220278 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1114 13:49:09.962910 1220278 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1114 13:49:09.962968 1220278 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1114 13:49:10.023165 1220278 kubeadm.go:322] W1114 13:49:10.022631    1229 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1114 13:49:10.077494 1220278 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1049-aws\n", err: exit status 1
	I1114 13:49:10.168024 1220278 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1114 13:49:17.901095 1220278 kubeadm.go:322] W1114 13:49:17.899123    1229 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1114 13:49:17.901562 1220278 kubeadm.go:322] W1114 13:49:17.900877    1229 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1114 13:49:32.903401 1220278 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1114 13:49:32.903455 1220278 kubeadm.go:322] [preflight] Running pre-flight checks
	I1114 13:49:32.903537 1220278 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1114 13:49:32.903588 1220278 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1049-aws
	I1114 13:49:32.903632 1220278 kubeadm.go:322] OS: Linux
	I1114 13:49:32.903674 1220278 kubeadm.go:322] CGROUPS_CPU: enabled
	I1114 13:49:32.903719 1220278 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1114 13:49:32.903763 1220278 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1114 13:49:32.903822 1220278 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1114 13:49:32.903868 1220278 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1114 13:49:32.903912 1220278 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1114 13:49:32.903979 1220278 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1114 13:49:32.904066 1220278 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1114 13:49:32.904151 1220278 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1114 13:49:32.904246 1220278 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1114 13:49:32.904324 1220278 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1114 13:49:32.904361 1220278 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1114 13:49:32.904426 1220278 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1114 13:49:32.906744 1220278 out.go:204]   - Generating certificates and keys ...
	I1114 13:49:32.906826 1220278 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1114 13:49:32.906895 1220278 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1114 13:49:32.906968 1220278 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1114 13:49:32.907028 1220278 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1114 13:49:32.907087 1220278 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1114 13:49:32.907138 1220278 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1114 13:49:32.907193 1220278 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1114 13:49:32.907320 1220278 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-814110 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1114 13:49:32.907373 1220278 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1114 13:49:32.907511 1220278 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-814110 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1114 13:49:32.907577 1220278 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1114 13:49:32.907648 1220278 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1114 13:49:32.907701 1220278 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1114 13:49:32.907763 1220278 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1114 13:49:32.907819 1220278 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1114 13:49:32.907872 1220278 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1114 13:49:32.907938 1220278 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1114 13:49:32.907992 1220278 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1114 13:49:32.908056 1220278 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1114 13:49:32.911112 1220278 out.go:204]   - Booting up control plane ...
	I1114 13:49:32.911286 1220278 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1114 13:49:32.911366 1220278 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1114 13:49:32.911433 1220278 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1114 13:49:32.911514 1220278 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1114 13:49:32.911687 1220278 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1114 13:49:32.911765 1220278 kubeadm.go:322] [apiclient] All control plane components are healthy after 13.502366 seconds
	I1114 13:49:32.911870 1220278 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1114 13:49:32.911998 1220278 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I1114 13:49:32.912056 1220278 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1114 13:49:32.912189 1220278 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-814110 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1114 13:49:32.912245 1220278 kubeadm.go:322] [bootstrap-token] Using token: exlcvv.ck94ijhdfq1109uo
	I1114 13:49:32.914773 1220278 out.go:204]   - Configuring RBAC rules ...
	I1114 13:49:32.914901 1220278 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1114 13:49:32.914986 1220278 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1114 13:49:32.915125 1220278 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1114 13:49:32.915251 1220278 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1114 13:49:32.915365 1220278 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1114 13:49:32.915449 1220278 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1114 13:49:32.915563 1220278 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1114 13:49:32.915631 1220278 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1114 13:49:32.915677 1220278 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1114 13:49:32.915681 1220278 kubeadm.go:322] 
	I1114 13:49:32.915741 1220278 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1114 13:49:32.915746 1220278 kubeadm.go:322] 
	I1114 13:49:32.915822 1220278 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1114 13:49:32.915826 1220278 kubeadm.go:322] 
	I1114 13:49:32.915851 1220278 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1114 13:49:32.915910 1220278 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1114 13:49:32.915960 1220278 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1114 13:49:32.915965 1220278 kubeadm.go:322] 
	I1114 13:49:32.916020 1220278 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1114 13:49:32.916094 1220278 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1114 13:49:32.916162 1220278 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1114 13:49:32.916166 1220278 kubeadm.go:322] 
	I1114 13:49:32.916249 1220278 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1114 13:49:32.916327 1220278 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1114 13:49:32.916332 1220278 kubeadm.go:322] 
	I1114 13:49:32.916418 1220278 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token exlcvv.ck94ijhdfq1109uo \
	I1114 13:49:32.916523 1220278 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:1a1b25420be6487c50639ce0b981e16ee30b54e658d487c3adf6952ff2c4a2c6 \
	I1114 13:49:32.916638 1220278 kubeadm.go:322]     --control-plane 
	I1114 13:49:32.916666 1220278 kubeadm.go:322] 
	I1114 13:49:32.916789 1220278 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1114 13:49:32.916812 1220278 kubeadm.go:322] 
	I1114 13:49:32.916926 1220278 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token exlcvv.ck94ijhdfq1109uo \
	I1114 13:49:32.917084 1220278 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:1a1b25420be6487c50639ce0b981e16ee30b54e658d487c3adf6952ff2c4a2c6 
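	The sha256 value in the join commands is a pin on the cluster CA's public key; it can be recomputed on the node from the CA file copied earlier (/var/lib/minikube/certs/ca.crt) with the standard kubeadm recipe:
	
		# recompute --discovery-token-ca-cert-hash from the cluster CA
		openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
		  | openssl rsa -pubin -outform der 2>/dev/null \
		  | openssl dgst -sha256 -hex | sed 's/^.* //'
	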
	I1114 13:49:32.917107 1220278 cni.go:84] Creating CNI manager for ""
	I1114 13:49:32.917115 1220278 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1114 13:49:32.919892 1220278 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1114 13:49:32.922458 1220278 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1114 13:49:32.927556 1220278 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I1114 13:49:32.927578 1220278 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1114 13:49:32.950447 1220278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
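	Because the docker driver is paired with the crio runtime, minikube applies a kindnet CNI manifest instead of relying on a built-in bridge (see the recommendation at 13:49:32.917115). A quick way to confirm the result by hand, assuming the daemonset keeps its default name kindnet:
	
		# CNI plugin binaries staged in the kicbase image
		minikube -p ingress-addon-legacy-814110 ssh "ls /opt/cni/bin"
		# the applied manifest runs kindnet as a kube-system daemonset
		kubectl -n kube-system get daemonset kindnet
	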
	I1114 13:49:33.427430 1220278 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1114 13:49:33.427514 1220278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:49:33.427546 1220278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=6d8573efb5a7770e21024de23a29d810b200278b minikube.k8s.io/name=ingress-addon-legacy-814110 minikube.k8s.io/updated_at=2023_11_14T13_49_33_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:49:33.458632 1220278 ops.go:34] apiserver oom_adj: -16
	I1114 13:49:33.586349 1220278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:49:33.684982 1220278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:49:34.278607 1220278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:49:34.778724 1220278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:49:35.278863 1220278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:49:35.778067 1220278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:49:36.278990 1220278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:49:36.778921 1220278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:49:37.278975 1220278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:49:37.778063 1220278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:49:38.278612 1220278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:49:38.779022 1220278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:49:39.278652 1220278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:49:39.778830 1220278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:49:40.278772 1220278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:49:40.778111 1220278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:49:41.278293 1220278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:49:41.778198 1220278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:49:42.278792 1220278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:49:42.778701 1220278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:49:43.278858 1220278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:49:43.777989 1220278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:49:44.279014 1220278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:49:44.778708 1220278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:49:45.278738 1220278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:49:45.778570 1220278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:49:46.278587 1220278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:49:46.778092 1220278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:49:47.278904 1220278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:49:47.778593 1220278 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 13:49:47.881867 1220278 kubeadm.go:1081] duration metric: took 14.454427426s to wait for elevateKubeSystemPrivileges.
	I1114 13:49:47.881897 1220278 kubeadm.go:406] StartCluster complete in 37.994434447s
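	The burst of `kubectl get sa default` calls between 13:49:33 and 13:49:47 is the elevateKubeSystemPrivileges wait: kubeadm creates the `default` ServiceAccount asynchronously, so minikube polls for it (at roughly 500ms intervals, per the timestamps) before creating the minikube-rbac ClusterRoleBinding. A rough shell equivalent of that loop:
	
		# wait until kubeadm's controllers have created the default ServiceAccount
		until kubectl get sa default >/dev/null 2>&1; do
		  sleep 0.5
		done
	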
	I1114 13:49:47.881913 1220278 settings.go:142] acquiring lock: {Name:mk8b1f62ebfea123b4e39d0037f993206e354b59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 13:49:47.881985 1220278 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17581-1186318/kubeconfig
	I1114 13:49:47.882665 1220278 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17581-1186318/kubeconfig: {Name:mkf1191f735848932fc7f3417e1088220acbc478 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 13:49:47.883350 1220278 kapi.go:59] client config for ingress-addon-legacy-814110: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/client.crt", KeyFile:"/home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/client.key", CAFile:"/home/jenkins/minikube-integration/17581-1186318/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c4650), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1114 13:49:47.884764 1220278 config.go:182] Loaded profile config "ingress-addon-legacy-814110": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1114 13:49:47.884785 1220278 cert_rotation.go:137] Starting client certificate rotation controller
	I1114 13:49:47.884731 1220278 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1114 13:49:47.884822 1220278 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-814110"
	I1114 13:49:47.884836 1220278 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-814110"
	I1114 13:49:47.884889 1220278 host.go:66] Checking if "ingress-addon-legacy-814110" exists ...
	I1114 13:49:47.885172 1220278 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1114 13:49:47.885294 1220278 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-814110"
	I1114 13:49:47.885308 1220278 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-814110"
	I1114 13:49:47.885362 1220278 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-814110 --format={{.State.Status}}
	I1114 13:49:47.885630 1220278 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-814110 --format={{.State.Status}}
	I1114 13:49:47.928499 1220278 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 13:49:47.930706 1220278 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1114 13:49:47.930726 1220278 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1114 13:49:47.930793 1220278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-814110
	I1114 13:49:47.940123 1220278 kapi.go:59] client config for ingress-addon-legacy-814110: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/client.crt", KeyFile:"/home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/client.key", CAFile:"/home/jenkins/minikube-integration/17581-1186318/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c4650), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1114 13:49:47.940381 1220278 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-814110"
	I1114 13:49:47.940407 1220278 host.go:66] Checking if "ingress-addon-legacy-814110" exists ...
	I1114 13:49:47.940885 1220278 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-814110 --format={{.State.Status}}
	I1114 13:49:47.971286 1220278 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34294 SSHKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/ingress-addon-legacy-814110/id_rsa Username:docker}
	I1114 13:49:47.991569 1220278 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1114 13:49:47.991594 1220278 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1114 13:49:47.991671 1220278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-814110
	I1114 13:49:48.021194 1220278 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34294 SSHKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/ingress-addon-legacy-814110/id_rsa Username:docker}
	I1114 13:49:48.117098 1220278 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-814110" context rescaled to 1 replicas
	I1114 13:49:48.117188 1220278 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1114 13:49:48.129051 1220278 out.go:177] * Verifying Kubernetes components...
	I1114 13:49:48.131539 1220278 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 13:49:48.147568 1220278 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1114 13:49:48.158552 1220278 kapi.go:59] client config for ingress-addon-legacy-814110: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/client.crt", KeyFile:"/home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/client.key", CAFile:"/home/jenkins/minikube-integration/17581-1186318/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c4650), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1114 13:49:48.158891 1220278 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-814110" to be "Ready" ...
	I1114 13:49:48.170957 1220278 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1114 13:49:48.212636 1220278 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1114 13:49:48.621800 1220278 start.go:926] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
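	The sed pipeline at 13:49:48.147568 is what produced this record: it rewrites the coredns ConfigMap so a `hosts` block sits ahead of the `forward . /etc/resolv.conf` plugin, making host.minikube.internal resolve to the gateway. Reconstructed from the sed expressions, the injected Corefile fragment is:
	
		hosts {
		   192.168.49.1 host.minikube.internal
		   fallthrough
		}
	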
	I1114 13:49:48.831439 1220278 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1114 13:49:48.833408 1220278 addons.go:502] enable addons completed in 948.660718ms: enabled=[default-storageclass storage-provisioner]
	I1114 13:49:50.448814 1220278 node_ready.go:58] node "ingress-addon-legacy-814110" has status "Ready":"False"
	I1114 13:49:52.449098 1220278 node_ready.go:58] node "ingress-addon-legacy-814110" has status "Ready":"False"
	I1114 13:49:54.948626 1220278 node_ready.go:58] node "ingress-addon-legacy-814110" has status "Ready":"False"
	I1114 13:49:56.449517 1220278 node_ready.go:49] node "ingress-addon-legacy-814110" has status "Ready":"True"
	I1114 13:49:56.449545 1220278 node_ready.go:38] duration metric: took 8.290608275s waiting for node "ingress-addon-legacy-814110" to be "Ready" ...
	I1114 13:49:56.449556 1220278 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 13:49:56.457117 1220278 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-8k4sx" in "kube-system" namespace to be "Ready" ...
	I1114 13:49:58.465393 1220278 pod_ready.go:102] pod "coredns-66bff467f8-8k4sx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-14 13:49:48 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1114 13:50:00.965042 1220278 pod_ready.go:102] pod "coredns-66bff467f8-8k4sx" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-11-14 13:49:48 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1114 13:50:02.968053 1220278 pod_ready.go:102] pod "coredns-66bff467f8-8k4sx" in "kube-system" namespace has status "Ready":"False"
	I1114 13:50:03.967919 1220278 pod_ready.go:92] pod "coredns-66bff467f8-8k4sx" in "kube-system" namespace has status "Ready":"True"
	I1114 13:50:03.967946 1220278 pod_ready.go:81] duration metric: took 7.510794017s waiting for pod "coredns-66bff467f8-8k4sx" in "kube-system" namespace to be "Ready" ...
	I1114 13:50:03.967962 1220278 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-814110" in "kube-system" namespace to be "Ready" ...
	I1114 13:50:03.972405 1220278 pod_ready.go:92] pod "etcd-ingress-addon-legacy-814110" in "kube-system" namespace has status "Ready":"True"
	I1114 13:50:03.972434 1220278 pod_ready.go:81] duration metric: took 4.464733ms waiting for pod "etcd-ingress-addon-legacy-814110" in "kube-system" namespace to be "Ready" ...
	I1114 13:50:03.972449 1220278 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-814110" in "kube-system" namespace to be "Ready" ...
	I1114 13:50:03.977035 1220278 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-814110" in "kube-system" namespace has status "Ready":"True"
	I1114 13:50:03.977063 1220278 pod_ready.go:81] duration metric: took 4.606238ms waiting for pod "kube-apiserver-ingress-addon-legacy-814110" in "kube-system" namespace to be "Ready" ...
	I1114 13:50:03.977076 1220278 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-814110" in "kube-system" namespace to be "Ready" ...
	I1114 13:50:03.981687 1220278 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-814110" in "kube-system" namespace has status "Ready":"True"
	I1114 13:50:03.981710 1220278 pod_ready.go:81] duration metric: took 4.624232ms waiting for pod "kube-controller-manager-ingress-addon-legacy-814110" in "kube-system" namespace to be "Ready" ...
	I1114 13:50:03.981722 1220278 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-n98c2" in "kube-system" namespace to be "Ready" ...
	I1114 13:50:03.986360 1220278 pod_ready.go:92] pod "kube-proxy-n98c2" in "kube-system" namespace has status "Ready":"True"
	I1114 13:50:03.986390 1220278 pod_ready.go:81] duration metric: took 4.660064ms waiting for pod "kube-proxy-n98c2" in "kube-system" namespace to be "Ready" ...
	I1114 13:50:03.986406 1220278 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-814110" in "kube-system" namespace to be "Ready" ...
	I1114 13:50:04.162778 1220278 request.go:629] Waited for 176.251358ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-814110
	I1114 13:50:04.362701 1220278 request.go:629] Waited for 197.262807ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-814110
	I1114 13:50:04.365587 1220278 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-814110" in "kube-system" namespace has status "Ready":"True"
	I1114 13:50:04.365638 1220278 pod_ready.go:81] duration metric: took 379.223782ms waiting for pod "kube-scheduler-ingress-addon-legacy-814110" in "kube-system" namespace to be "Ready" ...
	I1114 13:50:04.365655 1220278 pod_ready.go:38] duration metric: took 7.916080406s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 13:50:04.365669 1220278 api_server.go:52] waiting for apiserver process to appear ...
	I1114 13:50:04.365745 1220278 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 13:50:04.378883 1220278 api_server.go:72] duration metric: took 16.261648378s to wait for apiserver process to appear ...
	I1114 13:50:04.378914 1220278 api_server.go:88] waiting for apiserver healthz status ...
	I1114 13:50:04.378933 1220278 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1114 13:50:04.387887 1220278 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1114 13:50:04.388860 1220278 api_server.go:141] control plane version: v1.18.20
	I1114 13:50:04.388885 1220278 api_server.go:131] duration metric: took 9.964555ms to wait for apiserver health ...
	I1114 13:50:04.388893 1220278 system_pods.go:43] waiting for kube-system pods to appear ...
	I1114 13:50:04.563188 1220278 request.go:629] Waited for 174.219528ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1114 13:50:04.569298 1220278 system_pods.go:59] 8 kube-system pods found
	I1114 13:50:04.569336 1220278 system_pods.go:61] "coredns-66bff467f8-8k4sx" [9f56ab7a-445a-4d18-9860-986f9b7ddbb0] Running
	I1114 13:50:04.569344 1220278 system_pods.go:61] "etcd-ingress-addon-legacy-814110" [e0bff927-a819-46ca-b5cd-0c502d41e1a1] Running
	I1114 13:50:04.569349 1220278 system_pods.go:61] "kindnet-66n2z" [6e8db226-b9d6-49cd-af22-fcb350c5de74] Running
	I1114 13:50:04.569354 1220278 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-814110" [79eff367-6adb-4c1d-acf5-b40295308f88] Running
	I1114 13:50:04.569359 1220278 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-814110" [c2e8342d-304f-4c20-a39c-54db23a2ee6d] Running
	I1114 13:50:04.569364 1220278 system_pods.go:61] "kube-proxy-n98c2" [efab5402-60be-4b66-b02e-7954cd10b4a2] Running
	I1114 13:50:04.569369 1220278 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-814110" [b4679785-d0f3-41dc-a0a4-1b4ff8e78906] Running
	I1114 13:50:04.569374 1220278 system_pods.go:61] "storage-provisioner" [f869d2a3-1807-444a-9049-9298cc449066] Running
	I1114 13:50:04.569381 1220278 system_pods.go:74] duration metric: took 180.481312ms to wait for pod list to return data ...
	I1114 13:50:04.569393 1220278 default_sa.go:34] waiting for default service account to be created ...
	I1114 13:50:04.762767 1220278 request.go:629] Waited for 193.264122ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I1114 13:50:04.765421 1220278 default_sa.go:45] found service account: "default"
	I1114 13:50:04.765462 1220278 default_sa.go:55] duration metric: took 196.056153ms for default service account to be created ...
	I1114 13:50:04.765475 1220278 system_pods.go:116] waiting for k8s-apps to be running ...
	I1114 13:50:04.962712 1220278 request.go:629] Waited for 197.17306ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I1114 13:50:04.968582 1220278 system_pods.go:86] 8 kube-system pods found
	I1114 13:50:04.968617 1220278 system_pods.go:89] "coredns-66bff467f8-8k4sx" [9f56ab7a-445a-4d18-9860-986f9b7ddbb0] Running
	I1114 13:50:04.968629 1220278 system_pods.go:89] "etcd-ingress-addon-legacy-814110" [e0bff927-a819-46ca-b5cd-0c502d41e1a1] Running
	I1114 13:50:04.968634 1220278 system_pods.go:89] "kindnet-66n2z" [6e8db226-b9d6-49cd-af22-fcb350c5de74] Running
	I1114 13:50:04.968639 1220278 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-814110" [79eff367-6adb-4c1d-acf5-b40295308f88] Running
	I1114 13:50:04.968645 1220278 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-814110" [c2e8342d-304f-4c20-a39c-54db23a2ee6d] Running
	I1114 13:50:04.968649 1220278 system_pods.go:89] "kube-proxy-n98c2" [efab5402-60be-4b66-b02e-7954cd10b4a2] Running
	I1114 13:50:04.968655 1220278 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-814110" [b4679785-d0f3-41dc-a0a4-1b4ff8e78906] Running
	I1114 13:50:04.968659 1220278 system_pods.go:89] "storage-provisioner" [f869d2a3-1807-444a-9049-9298cc449066] Running
	I1114 13:50:04.968667 1220278 system_pods.go:126] duration metric: took 203.185698ms to wait for k8s-apps to be running ...
	I1114 13:50:04.968674 1220278 system_svc.go:44] waiting for kubelet service to be running ....
	I1114 13:50:04.968735 1220278 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 13:50:04.982635 1220278 system_svc.go:56] duration metric: took 13.949817ms WaitForService to wait for kubelet.
	I1114 13:50:04.982666 1220278 kubeadm.go:581] duration metric: took 16.865437278s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1114 13:50:04.982687 1220278 node_conditions.go:102] verifying NodePressure condition ...
	I1114 13:50:05.163071 1220278 request.go:629] Waited for 180.315201ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I1114 13:50:05.165920 1220278 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1114 13:50:05.165957 1220278 node_conditions.go:123] node cpu capacity is 2
	I1114 13:50:05.165970 1220278 node_conditions.go:105] duration metric: took 183.277495ms to run NodePressure ...
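	The repeated "Waited for … due to client-side throttling" entries above are client-go pacing its own requests, not API-server priority-and-fairness: with QPS and Burst left at 0 in the rest.Config dumps earlier, client-go falls back to its defaults (5 QPS, burst 10), so back-to-back GETs get spaced out. kubectl surfaces the same messages under verbose logging (a sketch):
	
		# -v=6 logs each request; throttled calls print the same "Waited for ..." line
		kubectl get pods -A -v=6 2>&1 | grep -i throttl
	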
	I1114 13:50:05.165983 1220278 start.go:228] waiting for startup goroutines ...
	I1114 13:50:05.165990 1220278 start.go:233] waiting for cluster config update ...
	I1114 13:50:05.165999 1220278 start.go:242] writing updated cluster config ...
	I1114 13:50:05.166282 1220278 ssh_runner.go:195] Run: rm -f paused
	I1114 13:50:05.227844 1220278 start.go:600] kubectl: 1.28.3, cluster: 1.18.20 (minor skew: 10)
	I1114 13:50:05.230551 1220278 out.go:177] 
	W1114 13:50:05.232321 1220278 out.go:239] ! /usr/local/bin/kubectl is version 1.28.3, which may have incompatibilities with Kubernetes 1.18.20.
	I1114 13:50:05.234090 1220278 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I1114 13:50:05.236088 1220278 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-814110" cluster and "default" namespace by default
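	The skew warning above is worth heeding: kubectl supports only ±1 minor version against the API server, and 1.28 against 1.18 is a 10-minor gap. minikube bundles a matching client, so the suggested form avoids the skew entirely:
	
		# run a kubectl binary matching the cluster's v1.18.20, scoped to this profile
		minikube -p ingress-addon-legacy-814110 kubectl -- get pods -A
	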
	
	* 
	* ==> CRI-O <==
	* Nov 14 13:56:22 ingress-addon-legacy-814110 crio[898]: time="2023-11-14 13:56:22.238901827Z" level=info msg="Checking image status: cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" id=58fd99ca-3e15-4f56-8f47-c96415f83a1c name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 14 13:56:25 ingress-addon-legacy-814110 crio[898]: time="2023-11-14 13:56:25.748938568Z" level=info msg="Pulling image: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=8d4b7c1a-0349-4063-8fcc-eaf903b316d3 name=/runtime.v1alpha2.ImageService/PullImage
	Nov 14 13:56:25 ingress-addon-legacy-814110 crio[898]: time="2023-11-14 13:56:25.751138587Z" level=info msg="Trying to access \"docker.io/jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Nov 14 13:56:34 ingress-addon-legacy-814110 crio[898]: time="2023-11-14 13:56:34.238816996Z" level=info msg="Checking image status: cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" id=5ca1f1b5-88bb-4ced-b2a6-88f90b07b62f name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 14 13:56:36 ingress-addon-legacy-814110 crio[898]: time="2023-11-14 13:56:36.239242474Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=775f77f0-7aeb-4e64-aff0-83c63970d430 name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 14 13:56:36 ingress-addon-legacy-814110 crio[898]: time="2023-11-14 13:56:36.239542534Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=775f77f0-7aeb-4e64-aff0-83c63970d430 name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 14 13:56:46 ingress-addon-legacy-814110 crio[898]: time="2023-11-14 13:56:46.238961983Z" level=info msg="Checking image status: cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" id=dcebd530-aae1-4392-a094-bca79aa5cd5e name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 14 13:56:51 ingress-addon-legacy-814110 crio[898]: time="2023-11-14 13:56:51.238687038Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=06dd9d37-f843-404b-938b-1d35be13fe26 name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 14 13:56:51 ingress-addon-legacy-814110 crio[898]: time="2023-11-14 13:56:51.238972788Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=06dd9d37-f843-404b-938b-1d35be13fe26 name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 14 13:56:57 ingress-addon-legacy-814110 crio[898]: time="2023-11-14 13:56:57.238710350Z" level=info msg="Checking image status: cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" id=563fd7c1-7a6e-4de1-a684-a1772f09da1f name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 14 13:57:06 ingress-addon-legacy-814110 crio[898]: time="2023-11-14 13:57:06.239124399Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=095fb9c0-ae20-4b87-a311-d35282065ef0 name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 14 13:57:06 ingress-addon-legacy-814110 crio[898]: time="2023-11-14 13:57:06.239422554Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=095fb9c0-ae20-4b87-a311-d35282065ef0 name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 14 13:57:07 ingress-addon-legacy-814110 crio[898]: time="2023-11-14 13:57:07.238906257Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=61e060b6-5fb5-45eb-b661-5aebaa5bca20 name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 14 13:57:07 ingress-addon-legacy-814110 crio[898]: time="2023-11-14 13:57:07.239180035Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=61e060b6-5fb5-45eb-b661-5aebaa5bca20 name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 14 13:57:10 ingress-addon-legacy-814110 crio[898]: time="2023-11-14 13:57:10.238775912Z" level=info msg="Checking image status: cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" id=98503596-a2f2-442a-b453-4c2718f2c576 name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 14 13:57:18 ingress-addon-legacy-814110 crio[898]: time="2023-11-14 13:57:18.238896382Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=61380d6d-50d2-42b9-9005-8d6df8e5b426 name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 14 13:57:18 ingress-addon-legacy-814110 crio[898]: time="2023-11-14 13:57:18.239175691Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=61380d6d-50d2-42b9-9005-8d6df8e5b426 name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 14 13:57:22 ingress-addon-legacy-814110 crio[898]: time="2023-11-14 13:57:22.238821638Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=42000f8c-f941-4aea-8279-a4df676a29f1 name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 14 13:57:22 ingress-addon-legacy-814110 crio[898]: time="2023-11-14 13:57:22.239119802Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=42000f8c-f941-4aea-8279-a4df676a29f1 name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 14 13:57:23 ingress-addon-legacy-814110 crio[898]: time="2023-11-14 13:57:23.238729465Z" level=info msg="Checking image status: cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" id=f27b122b-5d7c-4ffc-918c-cff00f420273 name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 14 13:57:29 ingress-addon-legacy-814110 crio[898]: time="2023-11-14 13:57:29.238706492Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=95f48bc4-d219-45ff-88b7-a946097bdeee name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 14 13:57:29 ingress-addon-legacy-814110 crio[898]: time="2023-11-14 13:57:29.238994604Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=95f48bc4-d219-45ff-88b7-a946097bdeee name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 14 13:57:33 ingress-addon-legacy-814110 crio[898]: time="2023-11-14 13:57:33.238803370Z" level=info msg="Checking image status: docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" id=f171c384-77d5-444e-a0f7-42f9c7d46e3b name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 14 13:57:33 ingress-addon-legacy-814110 crio[898]: time="2023-11-14 13:57:33.239091491Z" level=info msg="Image docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 not found" id=f171c384-77d5-444e-a0f7-42f9c7d46e3b name=/runtime.v1alpha2.ImageService/ImageStatus
	Nov 14 13:57:38 ingress-addon-legacy-814110 crio[898]: time="2023-11-14 13:57:38.238870657Z" level=info msg="Checking image status: cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" id=f03a90e4-c21a-483f-a9aa-2e1212e5dce2 name=/runtime.v1alpha2.ImageService/ImageStatus
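	This CRI-O excerpt points at the likely root cause of the ingress failures reported above: the pull of docker.io/jettech/kube-webhook-certgen:v1.5.1 started at 13:56:25 never completes, so every subsequent ImageStatus check answers "not found" and the admission-webhook patch jobs cannot start. The same check can be reproduced on the node (a sketch, run inside `minikube ssh`):
	
		# ask CRI-O directly whether the image landed
		sudo crictl inspecti docker.io/jettech/kube-webhook-certgen:v1.5.1
		# retry the pull by hand to surface registry or arch errors
		sudo crictl pull docker.io/jettech/kube-webhook-certgen:v1.5.1
	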
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                             CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e2dfea6db0038       6e17ba78cf3ebe1410fe828dc4ca57d3df37ad0b3c1a64161e5c27d57a24d184                                                  7 minutes ago       Running             coredns                   0                   16e72ca153e72       coredns-66bff467f8-8k4sx
	2cf9960ed4483       gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2   7 minutes ago       Running             storage-provisioner       0                   7eb3670aa5439       storage-provisioner
	1233093423d4d       docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052                7 minutes ago       Running             kindnet-cni               0                   4eba235f78dc7       kindnet-66n2z
	3ff47c9dd0749       565297bc6f7d41fdb7a8ac7f9d75617ef4e6efdd1b1e41af6e060e19c44c28a8                                                  7 minutes ago       Running             kube-proxy                0                   80b0c62f16b2f       kube-proxy-n98c2
	4e5d19b2f0e82       68a4fac29a865f21217550dbd3570dc1adbc602cf05d6eeb6f060eec1359e1f1                                                  8 minutes ago       Running             kube-controller-manager   0                   d5943c1440d01       kube-controller-manager-ingress-addon-legacy-814110
	c3514bf1a6e6b       2694cf044d66591c37b12c60ce1f1cdba3d271af5ebda43a2e4d32ebbadd97d0                                                  8 minutes ago       Running             kube-apiserver            0                   6e1fc3b419df2       kube-apiserver-ingress-addon-legacy-814110
	1e9198b4f97a6       ab707b0a0ea339254cc6e3f2e7d618d4793d5129acb2288e9194769271404952                                                  8 minutes ago       Running             etcd                      0                   4c9e140679323       etcd-ingress-addon-legacy-814110
	92ecb93026e45       095f37015706de6eedb4f57eb2f9a25a1e3bf4bec63d50ba73f8968ef4094fd1                                                  8 minutes ago       Running             kube-scheduler            0                   4d92df1fe9b1c       kube-scheduler-ingress-addon-legacy-814110
	
	* 
	* ==> coredns [e2dfea6db0038b864200055ec6a5d37fcf9105316391feb798764e2953c92119] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = 45700869df5177c7f3d9f7a279928a55
	CoreDNS-1.6.7
	linux/arm64, go1.13.6, da7f65b
	[INFO] 127.0.0.1:49151 - 49750 "HINFO IN 5663434194049127381.8240556951991835833. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013653597s
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-814110
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-814110
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d8573efb5a7770e21024de23a29d810b200278b
	                    minikube.k8s.io/name=ingress-addon-legacy-814110
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_14T13_49_33_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 14 Nov 2023 13:49:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-814110
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 14 Nov 2023 13:57:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 14 Nov 2023 13:55:06 +0000   Tue, 14 Nov 2023 13:49:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 14 Nov 2023 13:55:06 +0000   Tue, 14 Nov 2023 13:49:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 14 Nov 2023 13:55:06 +0000   Tue, 14 Nov 2023 13:49:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 14 Nov 2023 13:55:06 +0000   Tue, 14 Nov 2023 13:49:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-814110
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 1871bd87db4d4cf8ac58e66a78d35c50
	  System UUID:                0ea239da-03bc-4af9-a9d0-18b9b6d0d8b9
	  Boot ID:                    3bdb9c53-2d63-44b9-be60-6ff1ad471e35
	  Kernel Version:             5.15.0-1049-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  ingress-nginx               ingress-nginx-admission-create-wkbjv                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m35s
	  ingress-nginx               ingress-nginx-admission-patch-9zb8z                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m35s
	  ingress-nginx               ingress-nginx-controller-7fcf777cb7-9bsww              100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         7m35s
	  kube-system                 coredns-66bff467f8-8k4sx                               100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     7m53s
	  kube-system                 etcd-ingress-addon-legacy-814110                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m5s
	  kube-system                 kindnet-66n2z                                          100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      7m53s
	  kube-system                 kube-apiserver-ingress-addon-legacy-814110             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m5s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-814110    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m5s
	  kube-system                 kube-ingress-dns-minikube                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-proxy-n98c2                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m53s
	  kube-system                 kube-scheduler-ingress-addon-legacy-814110             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m5s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             210Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  8m20s (x5 over 8m20s)  kubelet     Node ingress-addon-legacy-814110 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m20s (x5 over 8m20s)  kubelet     Node ingress-addon-legacy-814110 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m20s (x4 over 8m20s)  kubelet     Node ingress-addon-legacy-814110 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m5s                   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m5s                   kubelet     Node ingress-addon-legacy-814110 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m5s                   kubelet     Node ingress-addon-legacy-814110 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m5s                   kubelet     Node ingress-addon-legacy-814110 status is now: NodeHasSufficientPID
	  Normal  Starting                 7m53s                  kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                7m45s                  kubelet     Node ingress-addon-legacy-814110 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.001143] FS-Cache: O-key=[8] '84643b0000000000'
	[  +0.000762] FS-Cache: N-cookie c=00000066 [p=0000005d fl=2 nc=0 na=1]
	[  +0.000999] FS-Cache: N-cookie d=00000000fbc4fe34{9p.inode} n=000000002244812d
	[  +0.001146] FS-Cache: N-key=[8] '84643b0000000000'
	[  +0.003454] FS-Cache: Duplicate cookie detected
	[  +0.000756] FS-Cache: O-cookie c=0000005f [p=0000005d fl=226 nc=0 na=1]
	[  +0.001057] FS-Cache: O-cookie d=00000000fbc4fe34{9p.inode} n=00000000ecb0ec67
	[  +0.001110] FS-Cache: O-key=[8] '84643b0000000000'
	[  +0.000749] FS-Cache: N-cookie c=00000067 [p=0000005d fl=2 nc=0 na=1]
	[  +0.001022] FS-Cache: N-cookie d=00000000fbc4fe34{9p.inode} n=0000000038574f41
	[  +0.001139] FS-Cache: N-key=[8] '84643b0000000000'
	[  +3.132585] FS-Cache: Duplicate cookie detected
	[  +0.000755] FS-Cache: O-cookie c=0000005e [p=0000005d fl=226 nc=0 na=1]
	[  +0.001032] FS-Cache: O-cookie d=00000000fbc4fe34{9p.inode} n=00000000e83a4aa7
	[  +0.001160] FS-Cache: O-key=[8] '83643b0000000000'
	[  +0.000753] FS-Cache: N-cookie c=00000069 [p=0000005d fl=2 nc=0 na=1]
	[  +0.000982] FS-Cache: N-cookie d=00000000fbc4fe34{9p.inode} n=000000002244812d
	[  +0.001111] FS-Cache: N-key=[8] '83643b0000000000'
	[  +0.323161] FS-Cache: Duplicate cookie detected
	[  +0.000805] FS-Cache: O-cookie c=00000063 [p=0000005d fl=226 nc=0 na=1]
	[  +0.001104] FS-Cache: O-cookie d=00000000fbc4fe34{9p.inode} n=0000000060b8cdea
	[  +0.001286] FS-Cache: O-key=[8] '89643b0000000000'
	[  +0.000771] FS-Cache: N-cookie c=0000006a [p=0000005d fl=2 nc=0 na=1]
	[  +0.001023] FS-Cache: N-cookie d=00000000fbc4fe34{9p.inode} n=00000000495e4eb3
	[  +0.001223] FS-Cache: N-key=[8] '89643b0000000000'
	
	* 
	* ==> etcd [1e9198b4f97a6f3d51b839bca467dcfafccfb90dc04c16d668c85ab153a5b7fd] <==
	* 2023-11-14 13:49:24.759294 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-11-14 13:49:24.796679 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2023/11/14 13:49:24 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-11-14 13:49:24.829319 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2023-11-14 13:49:24.846494 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-11-14 13:49:24.944876 I | embed: listening for peers on 192.168.49.2:2380
	2023-11-14 13:49:24.954985 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2023/11/14 13:49:25 INFO: aec36adc501070cc is starting a new election at term 1
	raft2023/11/14 13:49:25 INFO: aec36adc501070cc became candidate at term 2
	raft2023/11/14 13:49:25 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2023/11/14 13:49:25 INFO: aec36adc501070cc became leader at term 2
	raft2023/11/14 13:49:25 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2023-11-14 13:49:25.685393 I | etcdserver: published {Name:ingress-addon-legacy-814110 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2023-11-14 13:49:25.685513 I | embed: ready to serve client requests
	2023-11-14 13:49:25.685751 I | etcdserver: setting up the initial cluster version to 3.4
	2023-11-14 13:49:25.686235 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-11-14 13:49:25.686381 I | etcdserver/api: enabled capabilities for version 3.4
	2023-11-14 13:49:25.686447 I | embed: ready to serve client requests
	2023-11-14 13:49:25.686923 I | embed: serving client requests on 127.0.0.1:2379
	2023-11-14 13:49:25.695626 I | embed: serving client requests on 192.168.49.2:2379
	2023-11-14 13:49:48.351462 W | etcdserver: read-only range request "key:\"/registry/replicasets/kube-system/coredns-66bff467f8\" " with result "range_response_count:1 size:3683" took too long (109.417065ms) to execute
	2023-11-14 13:49:48.351626 W | etcdserver: read-only range request "key:\"/registry/daemonsets/kube-system/kindnet\" " with result "range_response_count:1 size:4688" took too long (134.415983ms) to execute
	2023-11-14 13:49:48.353506 W | etcdserver: read-only range request "key:\"/registry/minions/ingress-addon-legacy-814110\" " with result "range_response_count:1 size:6504" took too long (136.819766ms) to execute
	2023-11-14 13:49:48.482373 W | etcdserver: read-only range request "key:\"/registry/minions/ingress-addon-legacy-814110\" " with result "range_response_count:1 size:6504" took too long (142.363123ms) to execute
	2023-11-14 13:49:48.493927 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-proxy-n98c2\" " with result "range_response_count:1 size:3588" took too long (157.528677ms) to execute
	
	* 
	* ==> kernel <==
	*  13:57:41 up 10:40,  0 users,  load average: 0.26, 0.30, 0.76
	Linux ingress-addon-legacy-814110 5.15.0-1049-aws #54~20.04.1-Ubuntu SMP Fri Oct 6 22:07:16 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [1233093423d4dd2cd51a48a275bd031397a8c8cc3f80caad8be47bdf0ce8d792] <==
	* I1114 13:55:31.862075       1 main.go:227] handling current node
	I1114 13:55:41.872040       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1114 13:55:41.872065       1 main.go:227] handling current node
	I1114 13:55:51.877846       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1114 13:55:51.877991       1 main.go:227] handling current node
	I1114 13:56:01.885629       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1114 13:56:01.885660       1 main.go:227] handling current node
	I1114 13:56:11.894120       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1114 13:56:11.894149       1 main.go:227] handling current node
	I1114 13:56:21.897999       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1114 13:56:21.898147       1 main.go:227] handling current node
	I1114 13:56:31.902554       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1114 13:56:31.902587       1 main.go:227] handling current node
	I1114 13:56:41.907122       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1114 13:56:41.907155       1 main.go:227] handling current node
	I1114 13:56:51.910954       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1114 13:56:51.910982       1 main.go:227] handling current node
	I1114 13:57:01.914929       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1114 13:57:01.914957       1 main.go:227] handling current node
	I1114 13:57:11.918188       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1114 13:57:11.918213       1 main.go:227] handling current node
	I1114 13:57:21.925194       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1114 13:57:21.925223       1 main.go:227] handling current node
	I1114 13:57:31.937415       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I1114 13:57:31.937445       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [c3514bf1a6e6b88e0d142b54a893762bacd6330d9afa8404a5bf8e09137177a0] <==
	* I1114 13:49:29.658848       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I1114 13:49:29.658888       1 shared_informer.go:223] Waiting for caches to sync for crd-autoregister
	E1114 13:49:29.741979       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg: 
	I1114 13:49:29.760768       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I1114 13:49:29.760915       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I1114 13:49:29.779780       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1114 13:49:29.780822       1 cache.go:39] Caches are synced for autoregister controller
	I1114 13:49:29.854094       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1114 13:49:30.650300       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1114 13:49:30.650330       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1114 13:49:30.655781       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I1114 13:49:30.659452       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I1114 13:49:30.659484       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I1114 13:49:31.053430       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1114 13:49:31.105755       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1114 13:49:31.177479       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1114 13:49:31.178534       1 controller.go:609] quota admission added evaluator for: endpoints
	I1114 13:49:31.183955       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1114 13:49:32.048572       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I1114 13:49:32.776089       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I1114 13:49:32.861246       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I1114 13:49:36.165694       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I1114 13:49:47.996234       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I1114 13:49:48.001970       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I1114 13:50:06.175721       1 controller.go:609] quota admission added evaluator for: jobs.batch
	
	* 
	* ==> kube-controller-manager [4e5d19b2f0e8227e0d1d4a26093125a427c7b7538a1409bf2f30f3c0ad038fba] <==
	* I1114 13:49:47.994421       1 shared_informer.go:230] Caches are synced for GC 
	I1114 13:49:47.995352       1 shared_informer.go:230] Caches are synced for HPA 
	I1114 13:49:47.995408       1 shared_informer.go:230] Caches are synced for ReplicationController 
	I1114 13:49:48.030677       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"370b3dc4-b3b0-49f7-b9aa-c2145e8d7601", APIVersion:"apps/v1", ResourceVersion:"210", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-66bff467f8 to 2
	I1114 13:49:48.055192       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1114 13:49:48.055412       1 shared_informer.go:230] Caches are synced for stateful set 
	I1114 13:49:48.055419       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1114 13:49:48.060875       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1114 13:49:48.104050       1 shared_informer.go:230] Caches are synced for resource quota 
	I1114 13:49:48.104615       1 shared_informer.go:230] Caches are synced for disruption 
	I1114 13:49:48.104683       1 disruption.go:339] Sending events to api server.
	I1114 13:49:48.104765       1 shared_informer.go:230] Caches are synced for resource quota 
	I1114 13:49:48.125996       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"0890a42d-76c8-47c6-bbeb-6e12fd4ce104", APIVersion:"apps/v1", ResourceVersion:"335", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-rx7dp
	I1114 13:49:48.126033       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"679a21da-915c-4cf6-8d67-398ff6e38ff7", APIVersion:"apps/v1", ResourceVersion:"234", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-66n2z
	I1114 13:49:48.176058       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"0890a42d-76c8-47c6-bbeb-6e12fd4ce104", APIVersion:"apps/v1", ResourceVersion:"335", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-8k4sx
	I1114 13:49:48.176092       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"63fc70cb-c20d-4678-abaf-fe3b26ca6316", APIVersion:"apps/v1", ResourceVersion:"217", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-n98c2
	I1114 13:49:48.367425       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"370b3dc4-b3b0-49f7-b9aa-c2145e8d7601", APIVersion:"apps/v1", ResourceVersion:"348", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	E1114 13:49:48.428167       1 daemon_controller.go:321] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kindnet", UID:"679a21da-915c-4cf6-8d67-398ff6e38ff7", ResourceVersion:"234", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63835566573, loc:(*time.Location)(0x6307ca0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\
"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"kindnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"docker.io/kindest/kindnetd:v20230809-80a64d96\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",
\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x40013e8b00), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x40013e8b20)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x40013e8b80), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*
int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x40013e8c40), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI
:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x40013e8cc0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVol
umeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x40013e8ce0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDis
k:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), Sca
leIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"docker.io/kindest/kindnetd:v20230809-80a64d96", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x40013e8d00)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x40013e8d40)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.Re
sourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log"
, TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x40010b4000), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4000a57158), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x400058d110), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.P
odDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x400000e1d0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4000a57220)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	I1114 13:49:48.688001       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"0890a42d-76c8-47c6-bbeb-6e12fd4ce104", APIVersion:"apps/v1", ResourceVersion:"356", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-rx7dp
	I1114 13:49:57.908068       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I1114 13:50:06.162797       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"f0b5351b-a056-4980-8005-7a8a6613c50a", APIVersion:"apps/v1", ResourceVersion:"476", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I1114 13:50:06.198252       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"9716d439-035f-4b5f-a317-dcc05a62c9c9", APIVersion:"apps/v1", ResourceVersion:"477", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-9bsww
	I1114 13:50:06.209522       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"b31188e0-261d-414f-a0fe-881dfb5f680d", APIVersion:"batch/v1", ResourceVersion:"480", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-wkbjv
	I1114 13:50:06.267765       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"115943da-a4ce-4353-87af-e2738aff5adf", APIVersion:"batch/v1", ResourceVersion:"493", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-9zb8z
	
	* 
	* ==> kube-proxy [3ff47c9dd0749b62e315cd73745025e93b632a3b2359ad311a1319f9c6db623c] <==
	* W1114 13:49:48.897655       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I1114 13:49:48.912917       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I1114 13:49:48.912968       1 server_others.go:186] Using iptables Proxier.
	I1114 13:49:48.913297       1 server.go:583] Version: v1.18.20
	I1114 13:49:48.915960       1 config.go:315] Starting service config controller
	I1114 13:49:48.916154       1 shared_informer.go:223] Waiting for caches to sync for service config
	I1114 13:49:48.916422       1 config.go:133] Starting endpoints config controller
	I1114 13:49:48.916459       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I1114 13:49:49.016524       1 shared_informer.go:230] Caches are synced for service config 
	I1114 13:49:49.016725       1 shared_informer.go:230] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [92ecb93026e45063bce6707ecf57a8e58efb9cfb6c1adbfefaa07a4540a4f13a] <==
	* I1114 13:49:29.824089       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I1114 13:49:29.825291       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1114 13:49:29.825316       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1114 13:49:29.829942       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1114 13:49:29.830692       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1114 13:49:29.830755       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1114 13:49:29.830810       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I1114 13:49:29.830817       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E1114 13:49:29.831690       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1114 13:49:29.831749       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1114 13:49:29.831810       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1114 13:49:29.831864       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1114 13:49:29.833722       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1114 13:49:29.833824       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1114 13:49:29.833932       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1114 13:49:29.834327       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1114 13:49:30.722291       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1114 13:49:30.799086       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1114 13:49:30.809179       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1114 13:49:30.825155       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1114 13:49:30.870477       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1114 13:49:30.879739       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I1114 13:49:31.325458       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E1114 13:49:48.534871       1 factory.go:503] pod: kube-system/coredns-66bff467f8-8k4sx is already present in the active queue
	E1114 13:49:48.634646       1 factory.go:503] pod: kube-system/coredns-66bff467f8-rx7dp is already present in the active queue
	
	* 
	* ==> kubelet <==
	* Nov 14 13:56:56 ingress-addon-legacy-814110 kubelet[1633]: E1114 13:56:56.036691    1633 kuberuntime_image.go:50] Pull image "docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7" failed: rpc error: code = Unknown desc = reading manifest sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Nov 14 13:56:56 ingress-addon-legacy-814110 kubelet[1633]: E1114 13:56:56.036758    1633 kuberuntime_manager.go:818] container start failed: ErrImagePull: rpc error: code = Unknown desc = reading manifest sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	Nov 14 13:56:56 ingress-addon-legacy-814110 kubelet[1633]: E1114 13:56:56.036793    1633 pod_workers.go:191] Error syncing pod a7b4d5e3-0c04-4f31-85da-0fc14bc4a673 ("ingress-nginx-admission-patch-9zb8z_ingress-nginx(a7b4d5e3-0c04-4f31-85da-0fc14bc4a673)"), skipping: failed to "StartContainer" for "patch" with ErrImagePull: "rpc error: code = Unknown desc = reading manifest sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7 in docker.io/jettech/kube-webhook-certgen: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit"
	Nov 14 13:56:57 ingress-addon-legacy-814110 kubelet[1633]: E1114 13:56:57.239252    1633 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Nov 14 13:56:57 ingress-addon-legacy-814110 kubelet[1633]: E1114 13:56:57.239306    1633 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Nov 14 13:56:57 ingress-addon-legacy-814110 kubelet[1633]: E1114 13:56:57.239358    1633 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Nov 14 13:56:57 ingress-addon-legacy-814110 kubelet[1633]: E1114 13:56:57.239390    1633 pod_workers.go:191] Error syncing pod 2e09f57a-f5b2-4e32-b886-267b7201b477 ("kube-ingress-dns-minikube_kube-system(2e09f57a-f5b2-4e32-b886-267b7201b477)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Nov 14 13:57:06 ingress-addon-legacy-814110 kubelet[1633]: E1114 13:57:06.239690    1633 pod_workers.go:191] Error syncing pod 8e74885b-9f59-40e3-bc05-1cb032cff8f3 ("ingress-nginx-admission-create-wkbjv_ingress-nginx(8e74885b-9f59-40e3-bc05-1cb032cff8f3)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Nov 14 13:57:07 ingress-addon-legacy-814110 kubelet[1633]: E1114 13:57:07.239406    1633 pod_workers.go:191] Error syncing pod a7b4d5e3-0c04-4f31-85da-0fc14bc4a673 ("ingress-nginx-admission-patch-9zb8z_ingress-nginx(a7b4d5e3-0c04-4f31-85da-0fc14bc4a673)"), skipping: failed to "StartContainer" for "patch" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Nov 14 13:57:10 ingress-addon-legacy-814110 kubelet[1633]: E1114 13:57:10.239658    1633 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Nov 14 13:57:10 ingress-addon-legacy-814110 kubelet[1633]: E1114 13:57:10.239702    1633 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Nov 14 13:57:10 ingress-addon-legacy-814110 kubelet[1633]: E1114 13:57:10.239744    1633 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Nov 14 13:57:10 ingress-addon-legacy-814110 kubelet[1633]: E1114 13:57:10.239796    1633 pod_workers.go:191] Error syncing pod 2e09f57a-f5b2-4e32-b886-267b7201b477 ("kube-ingress-dns-minikube_kube-system(2e09f57a-f5b2-4e32-b886-267b7201b477)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Nov 14 13:57:18 ingress-addon-legacy-814110 kubelet[1633]: E1114 13:57:18.239859    1633 pod_workers.go:191] Error syncing pod 8e74885b-9f59-40e3-bc05-1cb032cff8f3 ("ingress-nginx-admission-create-wkbjv_ingress-nginx(8e74885b-9f59-40e3-bc05-1cb032cff8f3)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Nov 14 13:57:22 ingress-addon-legacy-814110 kubelet[1633]: E1114 13:57:22.239282    1633 pod_workers.go:191] Error syncing pod a7b4d5e3-0c04-4f31-85da-0fc14bc4a673 ("ingress-nginx-admission-patch-9zb8z_ingress-nginx(a7b4d5e3-0c04-4f31-85da-0fc14bc4a673)"), skipping: failed to "StartContainer" for "patch" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Nov 14 13:57:23 ingress-addon-legacy-814110 kubelet[1633]: E1114 13:57:23.239091    1633 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Nov 14 13:57:23 ingress-addon-legacy-814110 kubelet[1633]: E1114 13:57:23.239148    1633 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Nov 14 13:57:23 ingress-addon-legacy-814110 kubelet[1633]: E1114 13:57:23.239200    1633 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Nov 14 13:57:23 ingress-addon-legacy-814110 kubelet[1633]: E1114 13:57:23.239233    1633 pod_workers.go:191] Error syncing pod 2e09f57a-f5b2-4e32-b886-267b7201b477 ("kube-ingress-dns-minikube_kube-system(2e09f57a-f5b2-4e32-b886-267b7201b477)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Nov 14 13:57:29 ingress-addon-legacy-814110 kubelet[1633]: E1114 13:57:29.239234    1633 pod_workers.go:191] Error syncing pod 8e74885b-9f59-40e3-bc05-1cb032cff8f3 ("ingress-nginx-admission-create-wkbjv_ingress-nginx(8e74885b-9f59-40e3-bc05-1cb032cff8f3)"), skipping: failed to "StartContainer" for "create" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Nov 14 13:57:33 ingress-addon-legacy-814110 kubelet[1633]: E1114 13:57:33.239340    1633 pod_workers.go:191] Error syncing pod a7b4d5e3-0c04-4f31-85da-0fc14bc4a673 ("ingress-nginx-admission-patch-9zb8z_ingress-nginx(a7b4d5e3-0c04-4f31-85da-0fc14bc4a673)"), skipping: failed to "StartContainer" for "patch" with ImagePullBackOff: "Back-off pulling image \"docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7\""
	Nov 14 13:57:38 ingress-addon-legacy-814110 kubelet[1633]: E1114 13:57:38.239417    1633 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Nov 14 13:57:38 ingress-addon-legacy-814110 kubelet[1633]: E1114 13:57:38.239463    1633 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Nov 14 13:57:38 ingress-addon-legacy-814110 kubelet[1633]: E1114 13:57:38.239512    1633 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Nov 14 13:57:38 ingress-addon-legacy-814110 kubelet[1633]: E1114 13:57:38.239549    1633 pod_workers.go:191] Error syncing pod 2e09f57a-f5b2-4e32-b886-267b7201b477 ("kube-ingress-dns-minikube_kube-system(2e09f57a-f5b2-4e32-b886-267b7201b477)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	
	* 
	* ==> storage-provisioner [2cf9960ed4483ad88f1ec2b9a17f53e101cd65dcbc0dc50339d31b26821cb572] <==
	* I1114 13:49:58.723518       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1114 13:49:58.735493       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1114 13:49:58.735582       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1114 13:49:58.744035       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1114 13:49:58.744744       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7db563cf-348e-49ef-b918-ab8cd4b5c9ea", APIVersion:"v1", ResourceVersion:"422", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-814110_14709fb5-1c67-4b61-aff7-1b16d62de5fe became leader
	I1114 13:49:58.745091       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-814110_14709fb5-1c67-4b61-aff7-1b16d62de5fe!
	I1114 13:49:58.846155       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-814110_14709fb5-1c67-4b61-aff7-1b16d62de5fe!
	

-- /stdout --
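The kubelet errors in the log above show two distinct image problems: the admission jobs back off pulling docker.io/jettech/kube-webhook-certgen, while kube-ingress-dns-minikube never starts because CRI-O rejects the short name cryptexlabs/minikube-ingress-dns:0.3.0 when /etc/containers/registries.conf defines no unqualified-search registries. A minimal workaround sketch, assuming shell access to the node and that CRI-O runs under systemd inside the kicbase container; pinning the fully qualified name docker.io/cryptexlabs/minikube-ingress-dns:0.3.0 in the addon manifest would sidestep the search-registry lookup entirely:

	# hypothetical fix: let short names resolve against docker.io, then restart CRI-O
	out/minikube-linux-arm64 -p ingress-addon-legacy-814110 ssh -- \
	  "echo 'unqualified-search-registries = [\"docker.io\"]' | sudo tee -a /etc/containers/registries.conf \
	   && sudo systemctl restart crio"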
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-814110 -n ingress-addon-legacy-814110
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-814110 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-wkbjv ingress-nginx-admission-patch-9zb8z ingress-nginx-controller-7fcf777cb7-9bsww kube-ingress-dns-minikube
helpers_test.go:274: ======> post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ingress-addon-legacy-814110 describe pod ingress-nginx-admission-create-wkbjv ingress-nginx-admission-patch-9zb8z ingress-nginx-controller-7fcf777cb7-9bsww kube-ingress-dns-minikube
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context ingress-addon-legacy-814110 describe pod ingress-nginx-admission-create-wkbjv ingress-nginx-admission-patch-9zb8z ingress-nginx-controller-7fcf777cb7-9bsww kube-ingress-dns-minikube: exit status 1 (95.090323ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-wkbjv" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-9zb8z" not found
	Error from server (NotFound): pods "ingress-nginx-controller-7fcf777cb7-9bsww" not found
	Error from server (NotFound): pods "kube-ingress-dns-minikube" not found

** /stderr **
helpers_test.go:279: kubectl --context ingress-addon-legacy-814110 describe pod ingress-nginx-admission-create-wkbjv ingress-nginx-admission-patch-9zb8z ingress-nginx-controller-7fcf777cb7-9bsww kube-ingress-dns-minikube: exit status 1
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (92.53s)
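Note the post-mortem race here: helpers_test.go:261 still saw four non-running pods, but by the time helpers_test.go:277 ran describe, the addon teardown had already deleted them, so every lookup returned NotFound. A race-tolerant sketch, assuming the same context; it re-queries and describes in a single pass so pods deleted in between are skipped instead of failing the capture:

	kubectl --context ingress-addon-legacy-814110 get po -A \
	  --field-selector=status.phase!=Running --no-headers \
	  -o custom-columns=NS:.metadata.namespace,NAME:.metadata.name |
	while read -r ns name; do
	  kubectl --context ingress-addon-legacy-814110 -n "$ns" describe po "$name" || true
	done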

TestMultiNode/serial/PingHostFrom2Pods (4.25s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-683928 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-683928 -- exec busybox-5bc68d56bd-rl6d4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-683928 -- exec busybox-5bc68d56bd-rl6d4 -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-arm64 kubectl -p multinode-683928 -- exec busybox-5bc68d56bd-rl6d4 -- sh -c "ping -c 1 192.168.58.1": exit status 1 (256.966566ms)

-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-rl6d4): exit status 1
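busybox ping opens a raw ICMP socket, so "ping: permission denied (are you root?)" usually means the container lacks CAP_NET_RAW rather than that the gateway is unreachable; the PING header that reached stdout shows the address was resolved before the socket call failed. A quick check, reusing the pod from this run (CapEff is the effective capability mask and can be decoded on the host with capsh --decode):

	kubectl --context multinode-683928 exec busybox-5bc68d56bd-rl6d4 -- \
	  sh -c "grep CapEff /proc/self/status"

A possible fix is sketched after the second pod's failure below.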
multinode_test.go:560: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-683928 -- exec busybox-5bc68d56bd-vf6zm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-683928 -- exec busybox-5bc68d56bd-vf6zm -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-arm64 kubectl -p multinode-683928 -- exec busybox-5bc68d56bd-vf6zm -- sh -c "ping -c 1 192.168.58.1": exit status 1 (271.647287ms)

-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-vf6zm): exit status 1
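The identical failure from the second pod confirms this is a pod-level capability issue rather than anything node-specific. One hedged fix, assuming the busybox deployment in the default namespace (the Audit table below shows "rollout status deployment/busybox"): grant CAP_NET_RAW back to the container so raw-socket ping works without running as root.

	kubectl --context multinode-683928 patch deployment busybox --type=json -p='[
	  {"op": "add",
	   "path": "/spec/template/spec/containers/0/securityContext",
	   "value": {"capabilities": {"add": ["NET_RAW"]}}}
	]'

Alternatively, the test could probe host reachability over TCP (for example nc -z against a published host port), which needs no extra capabilities.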
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-683928
helpers_test.go:235: (dbg) docker inspect multinode-683928:

-- stdout --
	[
	    {
	        "Id": "95780648ef67ea835cd8638bb1ad39dc71166d07c9ffffe13531b9d9cc13b597",
	        "Created": "2023-11-14T14:03:45.377565027Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1256229,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-11-14T14:03:45.712316389Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:977f9df3a3e2dccc16de7b5e8115e5e1294a98b99d56135cce7538135e7a7a9d",
	        "ResolvConfPath": "/var/lib/docker/containers/95780648ef67ea835cd8638bb1ad39dc71166d07c9ffffe13531b9d9cc13b597/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/95780648ef67ea835cd8638bb1ad39dc71166d07c9ffffe13531b9d9cc13b597/hostname",
	        "HostsPath": "/var/lib/docker/containers/95780648ef67ea835cd8638bb1ad39dc71166d07c9ffffe13531b9d9cc13b597/hosts",
	        "LogPath": "/var/lib/docker/containers/95780648ef67ea835cd8638bb1ad39dc71166d07c9ffffe13531b9d9cc13b597/95780648ef67ea835cd8638bb1ad39dc71166d07c9ffffe13531b9d9cc13b597-json.log",
	        "Name": "/multinode-683928",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-683928:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "multinode-683928",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/5809e6c909aaa34f3e0d270a27151f1d2b5b3d6d243d9abd5086be4ba5c0276e-init/diff:/var/lib/docker/overlay2/ad9b1528ccc99a2a23c8205d781cfd6ce01aa0662a87aad99178910b13bfc77f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5809e6c909aaa34f3e0d270a27151f1d2b5b3d6d243d9abd5086be4ba5c0276e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5809e6c909aaa34f3e0d270a27151f1d2b5b3d6d243d9abd5086be4ba5c0276e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5809e6c909aaa34f3e0d270a27151f1d2b5b3d6d243d9abd5086be4ba5c0276e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-683928",
	                "Source": "/var/lib/docker/volumes/multinode-683928/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-683928",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-683928",
	                "name.minikube.sigs.k8s.io": "multinode-683928",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "77aaa32cf093e0b4e718524074a93d4f862c590e2bb7bd127b566413e4eed4eb",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34354"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34353"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34350"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34352"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34351"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/77aaa32cf093",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-683928": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "95780648ef67",
	                        "multinode-683928"
	                    ],
	                    "NetworkID": "a4b8be742be955647927d7bbec0f9a1f9a15c0fd0863631fd1136edd54436641",
	                    "EndpointID": "ed5c5be8fa7dac1dd63a18f3783ec2e8993d2ece341ff920e77b5b57e0380ed5",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
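The inspect output confirms the topology the ping test assumed: the node container holds static IP 192.168.58.2 on the user-defined bridge network multinode-683928, and the ping target 192.168.58.1 is that network's gateway on the host. Two one-liners extract those values; the first mirrors the exact format string minikube itself runs near the end of this log:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' multinode-683928
	docker network inspect multinode-683928 -f '{{range .IPAM.Config}}{{.Gateway}}{{end}}'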
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p multinode-683928 -n multinode-683928
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p multinode-683928 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p multinode-683928 logs -n 25: (1.571014738s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -p mount-start-2-559954                           | mount-start-2-559954 | jenkins | v1.32.0 | 14 Nov 23 14:03 UTC | 14 Nov 23 14:03 UTC |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| ssh     | mount-start-2-559954 ssh -- ls                    | mount-start-2-559954 | jenkins | v1.32.0 | 14 Nov 23 14:03 UTC | 14 Nov 23 14:03 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-557960                           | mount-start-1-557960 | jenkins | v1.32.0 | 14 Nov 23 14:03 UTC | 14 Nov 23 14:03 UTC |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-559954 ssh -- ls                    | mount-start-2-559954 | jenkins | v1.32.0 | 14 Nov 23 14:03 UTC | 14 Nov 23 14:03 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-559954                           | mount-start-2-559954 | jenkins | v1.32.0 | 14 Nov 23 14:03 UTC | 14 Nov 23 14:03 UTC |
	| start   | -p mount-start-2-559954                           | mount-start-2-559954 | jenkins | v1.32.0 | 14 Nov 23 14:03 UTC | 14 Nov 23 14:03 UTC |
	| ssh     | mount-start-2-559954 ssh -- ls                    | mount-start-2-559954 | jenkins | v1.32.0 | 14 Nov 23 14:03 UTC | 14 Nov 23 14:03 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-559954                           | mount-start-2-559954 | jenkins | v1.32.0 | 14 Nov 23 14:03 UTC | 14 Nov 23 14:03 UTC |
	| delete  | -p mount-start-1-557960                           | mount-start-1-557960 | jenkins | v1.32.0 | 14 Nov 23 14:03 UTC | 14 Nov 23 14:03 UTC |
	| start   | -p multinode-683928                               | multinode-683928     | jenkins | v1.32.0 | 14 Nov 23 14:03 UTC | 14 Nov 23 14:05 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-683928 -- apply -f                   | multinode-683928     | jenkins | v1.32.0 | 14 Nov 23 14:05 UTC | 14 Nov 23 14:05 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-683928 -- rollout                    | multinode-683928     | jenkins | v1.32.0 | 14 Nov 23 14:05 UTC | 14 Nov 23 14:05 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-683928 -- get pods -o                | multinode-683928     | jenkins | v1.32.0 | 14 Nov 23 14:05 UTC | 14 Nov 23 14:05 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-683928 -- get pods -o                | multinode-683928     | jenkins | v1.32.0 | 14 Nov 23 14:05 UTC | 14 Nov 23 14:05 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-683928 -- exec                       | multinode-683928     | jenkins | v1.32.0 | 14 Nov 23 14:05 UTC | 14 Nov 23 14:05 UTC |
	|         | busybox-5bc68d56bd-rl6d4 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-683928 -- exec                       | multinode-683928     | jenkins | v1.32.0 | 14 Nov 23 14:05 UTC | 14 Nov 23 14:05 UTC |
	|         | busybox-5bc68d56bd-vf6zm --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-683928 -- exec                       | multinode-683928     | jenkins | v1.32.0 | 14 Nov 23 14:05 UTC | 14 Nov 23 14:05 UTC |
	|         | busybox-5bc68d56bd-rl6d4 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-683928 -- exec                       | multinode-683928     | jenkins | v1.32.0 | 14 Nov 23 14:05 UTC | 14 Nov 23 14:05 UTC |
	|         | busybox-5bc68d56bd-vf6zm --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-683928 -- exec                       | multinode-683928     | jenkins | v1.32.0 | 14 Nov 23 14:05 UTC | 14 Nov 23 14:05 UTC |
	|         | busybox-5bc68d56bd-rl6d4 -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-683928 -- exec                       | multinode-683928     | jenkins | v1.32.0 | 14 Nov 23 14:05 UTC | 14 Nov 23 14:05 UTC |
	|         | busybox-5bc68d56bd-vf6zm -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-683928 -- get pods -o                | multinode-683928     | jenkins | v1.32.0 | 14 Nov 23 14:05 UTC | 14 Nov 23 14:05 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-683928 -- exec                       | multinode-683928     | jenkins | v1.32.0 | 14 Nov 23 14:05 UTC | 14 Nov 23 14:05 UTC |
	|         | busybox-5bc68d56bd-rl6d4                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-683928 -- exec                       | multinode-683928     | jenkins | v1.32.0 | 14 Nov 23 14:05 UTC |                     |
	|         | busybox-5bc68d56bd-rl6d4 -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-683928 -- exec                       | multinode-683928     | jenkins | v1.32.0 | 14 Nov 23 14:05 UTC | 14 Nov 23 14:05 UTC |
	|         | busybox-5bc68d56bd-vf6zm                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-683928 -- exec                       | multinode-683928     | jenkins | v1.32.0 | 14 Nov 23 14:05 UTC |                     |
	|         | busybox-5bc68d56bd-vf6zm -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/14 14:03:39
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1114 14:03:39.774541 1255771 out.go:296] Setting OutFile to fd 1 ...
	I1114 14:03:39.774683 1255771 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 14:03:39.774693 1255771 out.go:309] Setting ErrFile to fd 2...
	I1114 14:03:39.774699 1255771 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 14:03:39.774957 1255771 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17581-1186318/.minikube/bin
	I1114 14:03:39.775354 1255771 out.go:303] Setting JSON to false
	I1114 14:03:39.776371 1255771 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":38766,"bootTime":1699931854,"procs":355,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1114 14:03:39.776446 1255771 start.go:138] virtualization:  
	I1114 14:03:39.779075 1255771 out.go:177] * [multinode-683928] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1114 14:03:39.781627 1255771 out.go:177]   - MINIKUBE_LOCATION=17581
	I1114 14:03:39.783483 1255771 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1114 14:03:39.781778 1255771 notify.go:220] Checking for updates...
	I1114 14:03:39.787118 1255771 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17581-1186318/kubeconfig
	I1114 14:03:39.788976 1255771 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17581-1186318/.minikube
	I1114 14:03:39.790703 1255771 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1114 14:03:39.792591 1255771 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1114 14:03:39.794667 1255771 driver.go:378] Setting default libvirt URI to qemu:///system
	I1114 14:03:39.818551 1255771 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1114 14:03:39.818664 1255771 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1114 14:03:39.901416 1255771 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:35 SystemTime:2023-11-14 14:03:39.890803705 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1114 14:03:39.901538 1255771 docker.go:295] overlay module found
	I1114 14:03:39.903838 1255771 out.go:177] * Using the docker driver based on user configuration
	I1114 14:03:39.905932 1255771 start.go:298] selected driver: docker
	I1114 14:03:39.905950 1255771 start.go:902] validating driver "docker" against <nil>
	I1114 14:03:39.905965 1255771 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1114 14:03:39.906603 1255771 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1114 14:03:39.975714 1255771 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:35 SystemTime:2023-11-14 14:03:39.966299915 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1114 14:03:39.975883 1255771 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1114 14:03:39.976122 1255771 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1114 14:03:39.978099 1255771 out.go:177] * Using Docker driver with root privileges
	I1114 14:03:39.980417 1255771 cni.go:84] Creating CNI manager for ""
	I1114 14:03:39.980438 1255771 cni.go:136] 0 nodes found, recommending kindnet
	I1114 14:03:39.980450 1255771 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1114 14:03:39.980468 1255771 start_flags.go:323] config:
	{Name:multinode-683928 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-683928 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 14:03:39.983338 1255771 out.go:177] * Starting control plane node multinode-683928 in cluster multinode-683928
	I1114 14:03:39.985746 1255771 cache.go:121] Beginning downloading kic base image for docker with crio
	I1114 14:03:39.987934 1255771 out.go:177] * Pulling base image ...
	I1114 14:03:39.989969 1255771 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1114 14:03:39.990020 1255771 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4
	I1114 14:03:39.990033 1255771 cache.go:56] Caching tarball of preloaded images
	I1114 14:03:39.990077 1255771 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 in local docker daemon
	I1114 14:03:39.990117 1255771 preload.go:174] Found /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1114 14:03:39.990127 1255771 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1114 14:03:39.990490 1255771 profile.go:148] Saving config to /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/multinode-683928/config.json ...
	I1114 14:03:39.990515 1255771 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/multinode-683928/config.json: {Name:mk962c142998a126daaaa75936a2af6d18fbd4db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 14:03:40.011195 1255771 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 in local docker daemon, skipping pull
	I1114 14:03:40.011221 1255771 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 exists in daemon, skipping load
	I1114 14:03:40.011247 1255771 cache.go:194] Successfully downloaded all kic artifacts
	I1114 14:03:40.011302 1255771 start.go:365] acquiring machines lock for multinode-683928: {Name:mkb6074c1f7ef6f53429876100706ace81cfc8c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 14:03:40.011436 1255771 start.go:369] acquired machines lock for "multinode-683928" in 115.191µs
	I1114 14:03:40.011465 1255771 start.go:93] Provisioning new machine with config: &{Name:multinode-683928 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-683928 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1114 14:03:40.011544 1255771 start.go:125] createHost starting for "" (driver="docker")
	I1114 14:03:40.014421 1255771 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1114 14:03:40.014706 1255771 start.go:159] libmachine.API.Create for "multinode-683928" (driver="docker")
	I1114 14:03:40.014763 1255771 client.go:168] LocalClient.Create starting
	I1114 14:03:40.014836 1255771 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/ca.pem
	I1114 14:03:40.014887 1255771 main.go:141] libmachine: Decoding PEM data...
	I1114 14:03:40.014910 1255771 main.go:141] libmachine: Parsing certificate...
	I1114 14:03:40.014968 1255771 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/cert.pem
	I1114 14:03:40.014997 1255771 main.go:141] libmachine: Decoding PEM data...
	I1114 14:03:40.015018 1255771 main.go:141] libmachine: Parsing certificate...
	I1114 14:03:40.015392 1255771 cli_runner.go:164] Run: docker network inspect multinode-683928 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1114 14:03:40.033171 1255771 cli_runner.go:211] docker network inspect multinode-683928 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1114 14:03:40.033262 1255771 network_create.go:281] running [docker network inspect multinode-683928] to gather additional debugging logs...
	I1114 14:03:40.033281 1255771 cli_runner.go:164] Run: docker network inspect multinode-683928
	W1114 14:03:40.053692 1255771 cli_runner.go:211] docker network inspect multinode-683928 returned with exit code 1
	I1114 14:03:40.053724 1255771 network_create.go:284] error running [docker network inspect multinode-683928]: docker network inspect multinode-683928: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-683928 not found
	I1114 14:03:40.053737 1255771 network_create.go:286] output of [docker network inspect multinode-683928]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-683928 not found
	
	** /stderr **
	I1114 14:03:40.053849 1255771 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1114 14:03:40.071526 1255771 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d807bcb05d12 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:da:22:90:37} reservation:<nil>}
	I1114 14:03:40.071898 1255771 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40024eaeb0}
	I1114 14:03:40.071922 1255771 network_create.go:124] attempt to create docker network multinode-683928 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1114 14:03:40.071987 1255771 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-683928 multinode-683928
	I1114 14:03:40.145308 1255771 network_create.go:108] docker network multinode-683928 192.168.58.0/24 created
	I1114 14:03:40.145339 1255771 kic.go:121] calculated static IP "192.168.58.2" for the "multinode-683928" container
	I1114 14:03:40.145415 1255771 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1114 14:03:40.162636 1255771 cli_runner.go:164] Run: docker volume create multinode-683928 --label name.minikube.sigs.k8s.io=multinode-683928 --label created_by.minikube.sigs.k8s.io=true
	I1114 14:03:40.182373 1255771 oci.go:103] Successfully created a docker volume multinode-683928
	I1114 14:03:40.182453 1255771 cli_runner.go:164] Run: docker run --rm --name multinode-683928-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-683928 --entrypoint /usr/bin/test -v multinode-683928:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 -d /var/lib
	I1114 14:03:40.769229 1255771 oci.go:107] Successfully prepared a docker volume multinode-683928
	I1114 14:03:40.769293 1255771 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1114 14:03:40.769313 1255771 kic.go:194] Starting extracting preloaded images to volume ...
	I1114 14:03:40.769413 1255771 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-683928:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 -I lz4 -xf /preloaded.tar -C /extractDir
	I1114 14:03:45.283271 1255771 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-683928:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 -I lz4 -xf /preloaded.tar -C /extractDir: (4.513815224s)
	I1114 14:03:45.283310 1255771 kic.go:203] duration metric: took 4.513992 seconds to extract preloaded images to volume
	W1114 14:03:45.283488 1255771 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1114 14:03:45.283626 1255771 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1114 14:03:45.359004 1255771 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-683928 --name multinode-683928 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-683928 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-683928 --network multinode-683928 --ip 192.168.58.2 --volume multinode-683928:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24
	I1114 14:03:45.721541 1255771 cli_runner.go:164] Run: docker container inspect multinode-683928 --format={{.State.Running}}
	I1114 14:03:45.742257 1255771 cli_runner.go:164] Run: docker container inspect multinode-683928 --format={{.State.Status}}
	I1114 14:03:45.770165 1255771 cli_runner.go:164] Run: docker exec multinode-683928 stat /var/lib/dpkg/alternatives/iptables
	I1114 14:03:45.867921 1255771 oci.go:144] the created container "multinode-683928" has a running status.
	I1114 14:03:45.867958 1255771 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17581-1186318/.minikube/machines/multinode-683928/id_rsa...
	I1114 14:03:47.385431 1255771 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1186318/.minikube/machines/multinode-683928/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1114 14:03:47.385481 1255771 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17581-1186318/.minikube/machines/multinode-683928/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1114 14:03:47.410616 1255771 cli_runner.go:164] Run: docker container inspect multinode-683928 --format={{.State.Status}}
	I1114 14:03:47.430243 1255771 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1114 14:03:47.430270 1255771 kic_runner.go:114] Args: [docker exec --privileged multinode-683928 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1114 14:03:47.495216 1255771 cli_runner.go:164] Run: docker container inspect multinode-683928 --format={{.State.Status}}
	I1114 14:03:47.513481 1255771 machine.go:88] provisioning docker machine ...
	I1114 14:03:47.513516 1255771 ubuntu.go:169] provisioning hostname "multinode-683928"
	I1114 14:03:47.513589 1255771 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-683928
	I1114 14:03:47.531063 1255771 main.go:141] libmachine: Using SSH client type: native
	I1114 14:03:47.531494 1255771 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bded0] 0x3c0640 <nil>  [] 0s} 127.0.0.1 34354 <nil> <nil>}
	I1114 14:03:47.531514 1255771 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-683928 && echo "multinode-683928" | sudo tee /etc/hostname
	I1114 14:03:47.687489 1255771 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-683928
	
	I1114 14:03:47.687574 1255771 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-683928
	I1114 14:03:47.705563 1255771 main.go:141] libmachine: Using SSH client type: native
	I1114 14:03:47.705983 1255771 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bded0] 0x3c0640 <nil>  [] 0s} 127.0.0.1 34354 <nil> <nil>}
	I1114 14:03:47.706009 1255771 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-683928' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-683928/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-683928' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1114 14:03:47.845887 1255771 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1114 14:03:47.845925 1255771 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17581-1186318/.minikube CaCertPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17581-1186318/.minikube}
	I1114 14:03:47.845944 1255771 ubuntu.go:177] setting up certificates
	I1114 14:03:47.845966 1255771 provision.go:83] configureAuth start
	I1114 14:03:47.846039 1255771 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-683928
	I1114 14:03:47.864672 1255771 provision.go:138] copyHostCerts
	I1114 14:03:47.864724 1255771 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17581-1186318/.minikube/ca.pem
	I1114 14:03:47.864755 1255771 exec_runner.go:144] found /home/jenkins/minikube-integration/17581-1186318/.minikube/ca.pem, removing ...
	I1114 14:03:47.864767 1255771 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17581-1186318/.minikube/ca.pem
	I1114 14:03:47.864844 1255771 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17581-1186318/.minikube/ca.pem (1082 bytes)
	I1114 14:03:47.864926 1255771 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17581-1186318/.minikube/cert.pem
	I1114 14:03:47.864949 1255771 exec_runner.go:144] found /home/jenkins/minikube-integration/17581-1186318/.minikube/cert.pem, removing ...
	I1114 14:03:47.864958 1255771 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17581-1186318/.minikube/cert.pem
	I1114 14:03:47.864986 1255771 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17581-1186318/.minikube/cert.pem (1123 bytes)
	I1114 14:03:47.865032 1255771 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17581-1186318/.minikube/key.pem
	I1114 14:03:47.865051 1255771 exec_runner.go:144] found /home/jenkins/minikube-integration/17581-1186318/.minikube/key.pem, removing ...
	I1114 14:03:47.865058 1255771 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17581-1186318/.minikube/key.pem
	I1114 14:03:47.865081 1255771 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17581-1186318/.minikube/key.pem (1675 bytes)
	I1114 14:03:47.865129 1255771 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17581-1186318/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17581-1186318/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17581-1186318/.minikube/certs/ca-key.pem org=jenkins.multinode-683928 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-683928]
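The "generating server cert" step signs a fresh server key with the local minikube CA so the endpoint presents a certificate valid for every name in the san=[...] list above. A minimal, self-contained Go sketch of that signing flow (throwaway in-memory CA, assumed parameters; not the exact minikube implementation):

    // Sketch: sign a server certificate with a CA key so it covers the
    // IPs and DNS names logged in the san=[...] list above.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        // Throwaway CA standing in for .minikube/certs/ca.pem + ca-key.pem.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        ca, _ := x509.ParseCertificate(caDER)

        // Server certificate carrying the SANs from the log line above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.multinode-683928"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("192.168.58.2"), net.ParseIP("127.0.0.1")},
            DNSNames:     []string{"localhost", "minikube", "multinode-683928"},
        }
        srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, ca, &srvKey.PublicKey, caKey)
        fmt.Println(len(srvDER), err)
    }
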
	I1114 14:03:48.709992 1255771 provision.go:172] copyRemoteCerts
	I1114 14:03:48.710066 1255771 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1114 14:03:48.710111 1255771 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-683928
	I1114 14:03:48.728722 1255771 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34354 SSHKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/multinode-683928/id_rsa Username:docker}
	I1114 14:03:48.831598 1255771 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1114 14:03:48.831661 1255771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1114 14:03:48.862715 1255771 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1186318/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1114 14:03:48.862774 1255771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1114 14:03:48.891102 1255771 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1186318/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1114 14:03:48.891177 1255771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1114 14:03:48.919756 1255771 provision.go:86] duration metric: configureAuth took 1.073775025s
	I1114 14:03:48.919784 1255771 ubuntu.go:193] setting minikube options for container-runtime
	I1114 14:03:48.919980 1255771 config.go:182] Loaded profile config "multinode-683928": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 14:03:48.920103 1255771 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-683928
	I1114 14:03:48.938099 1255771 main.go:141] libmachine: Using SSH client type: native
	I1114 14:03:48.938550 1255771 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bded0] 0x3c0640 <nil>  [] 0s} 127.0.0.1 34354 <nil> <nil>}
	I1114 14:03:48.938574 1255771 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1114 14:03:49.191341 1255771 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1114 14:03:49.191364 1255771 machine.go:91] provisioned docker machine in 1.677857902s
	I1114 14:03:49.191373 1255771 client.go:171] LocalClient.Create took 9.176601127s
	I1114 14:03:49.191386 1255771 start.go:167] duration metric: libmachine.API.Create for "multinode-683928" took 9.176682858s
	I1114 14:03:49.191394 1255771 start.go:300] post-start starting for "multinode-683928" (driver="docker")
	I1114 14:03:49.191403 1255771 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1114 14:03:49.191461 1255771 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1114 14:03:49.191520 1255771 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-683928
	I1114 14:03:49.210783 1255771 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34354 SSHKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/multinode-683928/id_rsa Username:docker}
	I1114 14:03:49.311671 1255771 ssh_runner.go:195] Run: cat /etc/os-release
	I1114 14:03:49.315778 1255771 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I1114 14:03:49.315799 1255771 command_runner.go:130] > NAME="Ubuntu"
	I1114 14:03:49.315806 1255771 command_runner.go:130] > VERSION_ID="22.04"
	I1114 14:03:49.315812 1255771 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I1114 14:03:49.315818 1255771 command_runner.go:130] > VERSION_CODENAME=jammy
	I1114 14:03:49.315822 1255771 command_runner.go:130] > ID=ubuntu
	I1114 14:03:49.315827 1255771 command_runner.go:130] > ID_LIKE=debian
	I1114 14:03:49.315833 1255771 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1114 14:03:49.315847 1255771 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1114 14:03:49.315861 1255771 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1114 14:03:49.315870 1255771 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1114 14:03:49.315875 1255771 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I1114 14:03:49.315918 1255771 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1114 14:03:49.315941 1255771 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1114 14:03:49.315951 1255771 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1114 14:03:49.315959 1255771 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1114 14:03:49.315969 1255771 filesync.go:126] Scanning /home/jenkins/minikube-integration/17581-1186318/.minikube/addons for local assets ...
	I1114 14:03:49.316030 1255771 filesync.go:126] Scanning /home/jenkins/minikube-integration/17581-1186318/.minikube/files for local assets ...
	I1114 14:03:49.316108 1255771 filesync.go:149] local asset: /home/jenkins/minikube-integration/17581-1186318/.minikube/files/etc/ssl/certs/11916902.pem -> 11916902.pem in /etc/ssl/certs
	I1114 14:03:49.316115 1255771 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1186318/.minikube/files/etc/ssl/certs/11916902.pem -> /etc/ssl/certs/11916902.pem
	I1114 14:03:49.316223 1255771 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1114 14:03:49.326511 1255771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/files/etc/ssl/certs/11916902.pem --> /etc/ssl/certs/11916902.pem (1708 bytes)
	I1114 14:03:49.354773 1255771 start.go:303] post-start completed in 163.36516ms
	I1114 14:03:49.355142 1255771 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-683928
	I1114 14:03:49.375313 1255771 profile.go:148] Saving config to /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/multinode-683928/config.json ...
	I1114 14:03:49.375589 1255771 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1114 14:03:49.375648 1255771 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-683928
	I1114 14:03:49.393695 1255771 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34354 SSHKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/multinode-683928/id_rsa Username:docker}
	I1114 14:03:49.494541 1255771 command_runner.go:130] > 11%
	I1114 14:03:49.494631 1255771 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1114 14:03:49.500237 1255771 command_runner.go:130] > 173G
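The two df probes record how full /var is (11%) and how much room remains (173G) before the run continues. An illustrative way to issue and parse the same probe (command string taken from the log; error handling simplified):

    // Run the free-space probe from the log above and strip the unit suffix.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("sh", "-c", `df -BG /var | awk 'NR==2{print $4}'`).Output()
        if err != nil {
            panic(err)
        }
        // e.g. "173G" in the log above; drop the "G" for numeric comparisons.
        avail := strings.TrimSuffix(strings.TrimSpace(string(out)), "G")
        fmt.Println("free GiB on /var:", avail)
    }
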
	I1114 14:03:49.500799 1255771 start.go:128] duration metric: createHost completed in 9.48924383s
	I1114 14:03:49.500819 1255771 start.go:83] releasing machines lock for "multinode-683928", held for 9.489373569s
	I1114 14:03:49.500892 1255771 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-683928
	I1114 14:03:49.518364 1255771 ssh_runner.go:195] Run: cat /version.json
	I1114 14:03:49.518417 1255771 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-683928
	I1114 14:03:49.518435 1255771 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1114 14:03:49.518497 1255771 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-683928
	I1114 14:03:49.543654 1255771 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34354 SSHKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/multinode-683928/id_rsa Username:docker}
	I1114 14:03:49.560817 1255771 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34354 SSHKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/multinode-683928/id_rsa Username:docker}
	I1114 14:03:49.788357 1255771 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1114 14:03:49.788399 1255771 command_runner.go:130] > {"iso_version": "v1.32.1", "kicbase_version": "v0.0.42-1699485386-17565", "minikube_version": "v1.32.0", "commit": "ac8620e02dd92b447e2556d107d7751e3faf21d2"}
	I1114 14:03:49.788513 1255771 ssh_runner.go:195] Run: systemctl --version
	I1114 14:03:49.793841 1255771 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.11)
	I1114 14:03:49.793880 1255771 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1114 14:03:49.794181 1255771 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1114 14:03:49.942150 1255771 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1114 14:03:49.948530 1255771 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I1114 14:03:49.948583 1255771 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I1114 14:03:49.948591 1255771 command_runner.go:130] > Device: 3ah/58d	Inode: 1571320     Links: 1
	I1114 14:03:49.948599 1255771 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1114 14:03:49.948605 1255771 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I1114 14:03:49.948611 1255771 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I1114 14:03:49.948618 1255771 command_runner.go:130] > Change: 2023-11-14 13:34:26.425737793 +0000
	I1114 14:03:49.948632 1255771 command_runner.go:130] >  Birth: 2023-11-14 13:34:26.425737793 +0000
	I1114 14:03:49.949107 1255771 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1114 14:03:49.977743 1255771 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1114 14:03:49.977831 1255771 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1114 14:03:50.021331 1255771 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I1114 14:03:50.021393 1255771 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
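Rather than deleting conflicting CNI configs, the step above parks them under a .mk_disabled suffix so only minikube's own config is loaded and the change stays reversible. A Go equivalent of the find/mv idiom might look like this (illustrative only, shown for the loopback pass):

    // Park conflicting CNI configs so CRI-O does not load them.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        matches, _ := filepath.Glob("/etc/cni/net.d/*loopback.conf*")
        for _, m := range matches {
            if strings.HasSuffix(m, ".mk_disabled") {
                continue // already disabled on a previous run
            }
            if err := os.Rename(m, m+".mk_disabled"); err != nil {
                fmt.Println("rename failed:", err)
            }
        }
    }
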
	I1114 14:03:50.021407 1255771 start.go:472] detecting cgroup driver to use...
	I1114 14:03:50.021442 1255771 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1114 14:03:50.021498 1255771 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1114 14:03:50.041421 1255771 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1114 14:03:50.055426 1255771 docker.go:203] disabling cri-docker service (if available) ...
	I1114 14:03:50.055493 1255771 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1114 14:03:50.071946 1255771 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1114 14:03:50.090221 1255771 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1114 14:03:50.186537 1255771 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1114 14:03:50.284415 1255771 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1114 14:03:50.284444 1255771 docker.go:219] disabling docker service ...
	I1114 14:03:50.284498 1255771 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1114 14:03:50.306508 1255771 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1114 14:03:50.321066 1255771 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1114 14:03:50.419916 1255771 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1114 14:03:50.420072 1255771 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1114 14:03:50.529944 1255771 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1114 14:03:50.530036 1255771 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1114 14:03:50.543619 1255771 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1114 14:03:50.561856 1255771 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1114 14:03:50.563276 1255771 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1114 14:03:50.563370 1255771 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 14:03:50.575126 1255771 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1114 14:03:50.575288 1255771 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 14:03:50.587565 1255771 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 14:03:50.599871 1255771 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
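These sed invocations pin the pause image, switch cgroup_manager to "cgroupfs" (matching the driver detected on the host), and re-add conmon_cgroup = "pod" under it; the result is visible later in the dumped crio config. A compact sketch of the cgroup rewrite pair (assumed Go equivalent of the sed commands):

    // Force cgroup_manager to "cgroupfs" and re-insert conmon_cgroup after it.
    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        conf := "cgroup_manager = \"systemd\"\n"
        re := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
        conf = re.ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
        fmt.Print(conf)
    }
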
	I1114 14:03:50.611700 1255771 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1114 14:03:50.623238 1255771 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1114 14:03:50.632425 1255771 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1114 14:03:50.633647 1255771 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1114 14:03:50.643988 1255771 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1114 14:03:50.738819 1255771 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1114 14:03:50.861986 1255771 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1114 14:03:50.862088 1255771 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1114 14:03:50.867206 1255771 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1114 14:03:50.867232 1255771 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1114 14:03:50.867240 1255771 command_runner.go:130] > Device: 44h/68d	Inode: 190         Links: 1
	I1114 14:03:50.867249 1255771 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1114 14:03:50.867255 1255771 command_runner.go:130] > Access: 2023-11-14 14:03:50.846579798 +0000
	I1114 14:03:50.867262 1255771 command_runner.go:130] > Modify: 2023-11-14 14:03:50.846579798 +0000
	I1114 14:03:50.867271 1255771 command_runner.go:130] > Change: 2023-11-14 14:03:50.846579798 +0000
	I1114 14:03:50.867276 1255771 command_runner.go:130] >  Birth: -
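"Will wait 60s for socket path" is a simple existence poll on crio.sock after the restart; the stat succeeding above is what ends the wait. A minimal sketch of such a poll (assumed approach, not the exact minikube code):

    // Poll until /var/run/crio/crio.sock appears or the deadline passes.
    package main

    import (
        "fmt"
        "os"
        "time"
    )

    func main() {
        deadline := time.Now().Add(60 * time.Second)
        for time.Now().Before(deadline) {
            if _, err := os.Stat("/var/run/crio/crio.sock"); err == nil {
                fmt.Println("crio.sock is up")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for crio.sock")
    }
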
	I1114 14:03:50.867718 1255771 start.go:540] Will wait 60s for crictl version
	I1114 14:03:50.867776 1255771 ssh_runner.go:195] Run: which crictl
	I1114 14:03:50.871875 1255771 command_runner.go:130] > /usr/bin/crictl
	I1114 14:03:50.872295 1255771 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1114 14:03:50.925850 1255771 command_runner.go:130] > Version:  0.1.0
	I1114 14:03:50.925873 1255771 command_runner.go:130] > RuntimeName:  cri-o
	I1114 14:03:50.925881 1255771 command_runner.go:130] > RuntimeVersion:  1.24.6
	I1114 14:03:50.925888 1255771 command_runner.go:130] > RuntimeApiVersion:  v1
	I1114 14:03:50.928462 1255771 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1114 14:03:50.928564 1255771 ssh_runner.go:195] Run: crio --version
	I1114 14:03:50.974463 1255771 command_runner.go:130] > crio version 1.24.6
	I1114 14:03:50.974486 1255771 command_runner.go:130] > Version:          1.24.6
	I1114 14:03:50.974497 1255771 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1114 14:03:50.974503 1255771 command_runner.go:130] > GitTreeState:     clean
	I1114 14:03:50.974510 1255771 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1114 14:03:50.974515 1255771 command_runner.go:130] > GoVersion:        go1.18.2
	I1114 14:03:50.974520 1255771 command_runner.go:130] > Compiler:         gc
	I1114 14:03:50.974526 1255771 command_runner.go:130] > Platform:         linux/arm64
	I1114 14:03:50.974534 1255771 command_runner.go:130] > Linkmode:         dynamic
	I1114 14:03:50.974544 1255771 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1114 14:03:50.974553 1255771 command_runner.go:130] > SeccompEnabled:   true
	I1114 14:03:50.974558 1255771 command_runner.go:130] > AppArmorEnabled:  false
	I1114 14:03:50.976504 1255771 ssh_runner.go:195] Run: crio --version
	I1114 14:03:51.022990 1255771 command_runner.go:130] > crio version 1.24.6
	I1114 14:03:51.023013 1255771 command_runner.go:130] > Version:          1.24.6
	I1114 14:03:51.023023 1255771 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1114 14:03:51.023028 1255771 command_runner.go:130] > GitTreeState:     clean
	I1114 14:03:51.023035 1255771 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1114 14:03:51.023041 1255771 command_runner.go:130] > GoVersion:        go1.18.2
	I1114 14:03:51.023046 1255771 command_runner.go:130] > Compiler:         gc
	I1114 14:03:51.023053 1255771 command_runner.go:130] > Platform:         linux/arm64
	I1114 14:03:51.023068 1255771 command_runner.go:130] > Linkmode:         dynamic
	I1114 14:03:51.023091 1255771 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1114 14:03:51.023102 1255771 command_runner.go:130] > SeccompEnabled:   true
	I1114 14:03:51.023108 1255771 command_runner.go:130] > AppArmorEnabled:  false
	I1114 14:03:51.026885 1255771 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.6 ...
	I1114 14:03:51.028592 1255771 cli_runner.go:164] Run: docker network inspect multinode-683928 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1114 14:03:51.046532 1255771 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1114 14:03:51.051259 1255771 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
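The shell one-liner rewrites /etc/hosts in place: filter out any stale host.minikube.internal entry, append the current gateway mapping, and cp the temp file back over /etc/hosts. The same filter-and-append idiom in Go (illustrative only):

    // Drop a stale host.minikube.internal line, then append the fresh mapping.
    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        hosts := "127.0.0.1 localhost\n192.168.58.1\thost.minikube.internal\n"
        var kept []string
        for _, l := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
            if !strings.HasSuffix(l, "\thost.minikube.internal") {
                kept = append(kept, l)
            }
        }
        kept = append(kept, "192.168.58.1\thost.minikube.internal")
        fmt.Println(strings.Join(kept, "\n"))
    }
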
	I1114 14:03:51.065590 1255771 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1114 14:03:51.065661 1255771 ssh_runner.go:195] Run: sudo crictl images --output json
	I1114 14:03:51.129696 1255771 command_runner.go:130] > {
	I1114 14:03:51.129716 1255771 command_runner.go:130] >   "images": [
	I1114 14:03:51.129730 1255771 command_runner.go:130] >     {
	I1114 14:03:51.129741 1255771 command_runner.go:130] >       "id": "04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26",
	I1114 14:03:51.129746 1255771 command_runner.go:130] >       "repoTags": [
	I1114 14:03:51.129753 1255771 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1114 14:03:51.129758 1255771 command_runner.go:130] >       ],
	I1114 14:03:51.129766 1255771 command_runner.go:130] >       "repoDigests": [
	I1114 14:03:51.129777 1255771 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1114 14:03:51.129786 1255771 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2"
	I1114 14:03:51.129791 1255771 command_runner.go:130] >       ],
	I1114 14:03:51.129796 1255771 command_runner.go:130] >       "size": "60867618",
	I1114 14:03:51.129801 1255771 command_runner.go:130] >       "uid": null,
	I1114 14:03:51.129806 1255771 command_runner.go:130] >       "username": "",
	I1114 14:03:51.129814 1255771 command_runner.go:130] >       "spec": null,
	I1114 14:03:51.129819 1255771 command_runner.go:130] >       "pinned": false
	I1114 14:03:51.129823 1255771 command_runner.go:130] >     },
	I1114 14:03:51.129828 1255771 command_runner.go:130] >     {
	I1114 14:03:51.129835 1255771 command_runner.go:130] >       "id": "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1114 14:03:51.129840 1255771 command_runner.go:130] >       "repoTags": [
	I1114 14:03:51.129847 1255771 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1114 14:03:51.129851 1255771 command_runner.go:130] >       ],
	I1114 14:03:51.129857 1255771 command_runner.go:130] >       "repoDigests": [
	I1114 14:03:51.129866 1255771 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1114 14:03:51.129876 1255771 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1114 14:03:51.129882 1255771 command_runner.go:130] >       ],
	I1114 14:03:51.129890 1255771 command_runner.go:130] >       "size": "29037500",
	I1114 14:03:51.129895 1255771 command_runner.go:130] >       "uid": null,
	I1114 14:03:51.129900 1255771 command_runner.go:130] >       "username": "",
	I1114 14:03:51.129906 1255771 command_runner.go:130] >       "spec": null,
	I1114 14:03:51.129912 1255771 command_runner.go:130] >       "pinned": false
	I1114 14:03:51.129916 1255771 command_runner.go:130] >     },
	I1114 14:03:51.129920 1255771 command_runner.go:130] >     {
	I1114 14:03:51.129928 1255771 command_runner.go:130] >       "id": "97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108",
	I1114 14:03:51.129933 1255771 command_runner.go:130] >       "repoTags": [
	I1114 14:03:51.129940 1255771 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1114 14:03:51.129944 1255771 command_runner.go:130] >       ],
	I1114 14:03:51.129949 1255771 command_runner.go:130] >       "repoDigests": [
	I1114 14:03:51.129958 1255771 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105",
	I1114 14:03:51.129968 1255771 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"
	I1114 14:03:51.129973 1255771 command_runner.go:130] >       ],
	I1114 14:03:51.129978 1255771 command_runner.go:130] >       "size": "51393451",
	I1114 14:03:51.129982 1255771 command_runner.go:130] >       "uid": null,
	I1114 14:03:51.129989 1255771 command_runner.go:130] >       "username": "",
	I1114 14:03:51.129995 1255771 command_runner.go:130] >       "spec": null,
	I1114 14:03:51.130002 1255771 command_runner.go:130] >       "pinned": false
	I1114 14:03:51.130006 1255771 command_runner.go:130] >     },
	I1114 14:03:51.130010 1255771 command_runner.go:130] >     {
	I1114 14:03:51.130018 1255771 command_runner.go:130] >       "id": "9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace",
	I1114 14:03:51.130023 1255771 command_runner.go:130] >       "repoTags": [
	I1114 14:03:51.130029 1255771 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1114 14:03:51.130033 1255771 command_runner.go:130] >       ],
	I1114 14:03:51.130038 1255771 command_runner.go:130] >       "repoDigests": [
	I1114 14:03:51.130047 1255771 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3",
	I1114 14:03:51.130056 1255771 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b"
	I1114 14:03:51.130074 1255771 command_runner.go:130] >       ],
	I1114 14:03:51.130080 1255771 command_runner.go:130] >       "size": "182203183",
	I1114 14:03:51.130085 1255771 command_runner.go:130] >       "uid": {
	I1114 14:03:51.130090 1255771 command_runner.go:130] >         "value": "0"
	I1114 14:03:51.130094 1255771 command_runner.go:130] >       },
	I1114 14:03:51.130100 1255771 command_runner.go:130] >       "username": "",
	I1114 14:03:51.130106 1255771 command_runner.go:130] >       "spec": null,
	I1114 14:03:51.130113 1255771 command_runner.go:130] >       "pinned": false
	I1114 14:03:51.130117 1255771 command_runner.go:130] >     },
	I1114 14:03:51.130122 1255771 command_runner.go:130] >     {
	I1114 14:03:51.130130 1255771 command_runner.go:130] >       "id": "537e9a59ee2fdef3cc7f5ebd14f9c4c77047176fca2bc5599db196217efeb5d7",
	I1114 14:03:51.130135 1255771 command_runner.go:130] >       "repoTags": [
	I1114 14:03:51.130141 1255771 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.3"
	I1114 14:03:51.130145 1255771 command_runner.go:130] >       ],
	I1114 14:03:51.130150 1255771 command_runner.go:130] >       "repoDigests": [
	I1114 14:03:51.130159 1255771 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7055e7e0041a953d3fcec5950b88e8608ce09489f775dc0a8bd62a3300fd3ffa",
	I1114 14:03:51.130168 1255771 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:8db46adefb0f251da210504e2ce268c36a5a7c630667418ea4601f63c9057a2d"
	I1114 14:03:51.130173 1255771 command_runner.go:130] >       ],
	I1114 14:03:51.130178 1255771 command_runner.go:130] >       "size": "121054158",
	I1114 14:03:51.130185 1255771 command_runner.go:130] >       "uid": {
	I1114 14:03:51.130190 1255771 command_runner.go:130] >         "value": "0"
	I1114 14:03:51.130195 1255771 command_runner.go:130] >       },
	I1114 14:03:51.130200 1255771 command_runner.go:130] >       "username": "",
	I1114 14:03:51.130206 1255771 command_runner.go:130] >       "spec": null,
	I1114 14:03:51.130214 1255771 command_runner.go:130] >       "pinned": false
	I1114 14:03:51.130219 1255771 command_runner.go:130] >     },
	I1114 14:03:51.130223 1255771 command_runner.go:130] >     {
	I1114 14:03:51.130231 1255771 command_runner.go:130] >       "id": "8276439b4f237dda1f7820b0fcef600bb5662e441aa00e7b7c45843e60f04a16",
	I1114 14:03:51.130235 1255771 command_runner.go:130] >       "repoTags": [
	I1114 14:03:51.130242 1255771 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.3"
	I1114 14:03:51.130246 1255771 command_runner.go:130] >       ],
	I1114 14:03:51.130251 1255771 command_runner.go:130] >       "repoDigests": [
	I1114 14:03:51.130261 1255771 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707",
	I1114 14:03:51.130271 1255771 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c53671810fed4fd98b482a8e32f105585826221a4657ebd6181bc20becd3f0be"
	I1114 14:03:51.130275 1255771 command_runner.go:130] >       ],
	I1114 14:03:51.130282 1255771 command_runner.go:130] >       "size": "117252916",
	I1114 14:03:51.130287 1255771 command_runner.go:130] >       "uid": {
	I1114 14:03:51.130291 1255771 command_runner.go:130] >         "value": "0"
	I1114 14:03:51.130296 1255771 command_runner.go:130] >       },
	I1114 14:03:51.130301 1255771 command_runner.go:130] >       "username": "",
	I1114 14:03:51.130305 1255771 command_runner.go:130] >       "spec": null,
	I1114 14:03:51.130312 1255771 command_runner.go:130] >       "pinned": false
	I1114 14:03:51.130318 1255771 command_runner.go:130] >     },
	I1114 14:03:51.130322 1255771 command_runner.go:130] >     {
	I1114 14:03:51.130330 1255771 command_runner.go:130] >       "id": "a5dd5cdd6d3ef8058b7fbcecacbcee7f522fa8b9f3bb53bac6570e62ba2cbdbd",
	I1114 14:03:51.130335 1255771 command_runner.go:130] >       "repoTags": [
	I1114 14:03:51.130341 1255771 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.3"
	I1114 14:03:51.130345 1255771 command_runner.go:130] >       ],
	I1114 14:03:51.130351 1255771 command_runner.go:130] >       "repoDigests": [
	I1114 14:03:51.130359 1255771 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:0228eb00239c0ea5f627a6191fc192f4e20606b57419ce9e2e0c1588f960b483",
	I1114 14:03:51.130368 1255771 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:73a9f275e1fa5f0b9ae744914764847c2c4fdc66e9e528d67dea70007f9a6072"
	I1114 14:03:51.130373 1255771 command_runner.go:130] >       ],
	I1114 14:03:51.130377 1255771 command_runner.go:130] >       "size": "69926807",
	I1114 14:03:51.130383 1255771 command_runner.go:130] >       "uid": null,
	I1114 14:03:51.130388 1255771 command_runner.go:130] >       "username": "",
	I1114 14:03:51.130394 1255771 command_runner.go:130] >       "spec": null,
	I1114 14:03:51.130399 1255771 command_runner.go:130] >       "pinned": false
	I1114 14:03:51.130403 1255771 command_runner.go:130] >     },
	I1114 14:03:51.130407 1255771 command_runner.go:130] >     {
	I1114 14:03:51.130414 1255771 command_runner.go:130] >       "id": "42a4e73724daac2ee0c96eeeb79b9cf5f242fc3927ccfdc4df63b58140097314",
	I1114 14:03:51.130422 1255771 command_runner.go:130] >       "repoTags": [
	I1114 14:03:51.130428 1255771 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.3"
	I1114 14:03:51.130432 1255771 command_runner.go:130] >       ],
	I1114 14:03:51.130437 1255771 command_runner.go:130] >       "repoDigests": [
	I1114 14:03:51.130486 1255771 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725",
	I1114 14:03:51.130497 1255771 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:c0c5cdf040306fccc833bfa847f74be0f6ea5c828ba6c2a443210f68aa9bdd7c"
	I1114 14:03:51.130501 1255771 command_runner.go:130] >       ],
	I1114 14:03:51.130517 1255771 command_runner.go:130] >       "size": "59188020",
	I1114 14:03:51.130521 1255771 command_runner.go:130] >       "uid": {
	I1114 14:03:51.130528 1255771 command_runner.go:130] >         "value": "0"
	I1114 14:03:51.130538 1255771 command_runner.go:130] >       },
	I1114 14:03:51.130543 1255771 command_runner.go:130] >       "username": "",
	I1114 14:03:51.130566 1255771 command_runner.go:130] >       "spec": null,
	I1114 14:03:51.130571 1255771 command_runner.go:130] >       "pinned": false
	I1114 14:03:51.130577 1255771 command_runner.go:130] >     },
	I1114 14:03:51.130582 1255771 command_runner.go:130] >     {
	I1114 14:03:51.130589 1255771 command_runner.go:130] >       "id": "829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e",
	I1114 14:03:51.130596 1255771 command_runner.go:130] >       "repoTags": [
	I1114 14:03:51.130603 1255771 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1114 14:03:51.130608 1255771 command_runner.go:130] >       ],
	I1114 14:03:51.130613 1255771 command_runner.go:130] >       "repoDigests": [
	I1114 14:03:51.130621 1255771 command_runner.go:130] >         "registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6",
	I1114 14:03:51.130631 1255771 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"
	I1114 14:03:51.130635 1255771 command_runner.go:130] >       ],
	I1114 14:03:51.130640 1255771 command_runner.go:130] >       "size": "520014",
	I1114 14:03:51.130645 1255771 command_runner.go:130] >       "uid": {
	I1114 14:03:51.130650 1255771 command_runner.go:130] >         "value": "65535"
	I1114 14:03:51.130654 1255771 command_runner.go:130] >       },
	I1114 14:03:51.130659 1255771 command_runner.go:130] >       "username": "",
	I1114 14:03:51.130664 1255771 command_runner.go:130] >       "spec": null,
	I1114 14:03:51.130669 1255771 command_runner.go:130] >       "pinned": false
	I1114 14:03:51.130673 1255771 command_runner.go:130] >     }
	I1114 14:03:51.130678 1255771 command_runner.go:130] >   ]
	I1114 14:03:51.130682 1255771 command_runner.go:130] > }
	I1114 14:03:51.132377 1255771 crio.go:496] all images are preloaded for cri-o runtime.
	I1114 14:03:51.132397 1255771 crio.go:415] Images already preloaded, skipping extraction
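The preload check decodes `crictl images --output json` and verifies that the expected v1.28.3 tags are all present before deciding to skip extraction. A minimal sketch of that decode-and-check (struct shape inferred from the JSON above; sample data abbreviated):

    // Decode crictl's image list and look up an expected repo tag.
    package main

    import (
        "encoding/json"
        "fmt"
    )

    type crictlImages struct {
        Images []struct {
            ID       string   `json:"id"`
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    func main() {
        raw := []byte(`{"images":[{"id":"829e...","repoTags":["registry.k8s.io/pause:3.9"]}]}`)
        var out crictlImages
        if err := json.Unmarshal(raw, &out); err != nil {
            panic(err)
        }
        have := map[string]bool{}
        for _, img := range out.Images {
            for _, t := range img.RepoTags {
                have[t] = true
            }
        }
        fmt.Println("pause preloaded:", have["registry.k8s.io/pause:3.9"])
    }
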
	I1114 14:03:51.132456 1255771 ssh_runner.go:195] Run: sudo crictl images --output json
	I1114 14:03:51.179863 1255771 command_runner.go:130] > {
	I1114 14:03:51.179885 1255771 command_runner.go:130] >   "images": [
	I1114 14:03:51.179890 1255771 command_runner.go:130] >     {
	I1114 14:03:51.179900 1255771 command_runner.go:130] >       "id": "04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26",
	I1114 14:03:51.179906 1255771 command_runner.go:130] >       "repoTags": [
	I1114 14:03:51.179914 1255771 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1114 14:03:51.179927 1255771 command_runner.go:130] >       ],
	I1114 14:03:51.179939 1255771 command_runner.go:130] >       "repoDigests": [
	I1114 14:03:51.179949 1255771 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1114 14:03:51.179961 1255771 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2"
	I1114 14:03:51.179971 1255771 command_runner.go:130] >       ],
	I1114 14:03:51.179976 1255771 command_runner.go:130] >       "size": "60867618",
	I1114 14:03:51.179981 1255771 command_runner.go:130] >       "uid": null,
	I1114 14:03:51.179992 1255771 command_runner.go:130] >       "username": "",
	I1114 14:03:51.180003 1255771 command_runner.go:130] >       "spec": null,
	I1114 14:03:51.180010 1255771 command_runner.go:130] >       "pinned": false
	I1114 14:03:51.180016 1255771 command_runner.go:130] >     },
	I1114 14:03:51.180023 1255771 command_runner.go:130] >     {
	I1114 14:03:51.180032 1255771 command_runner.go:130] >       "id": "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1114 14:03:51.180040 1255771 command_runner.go:130] >       "repoTags": [
	I1114 14:03:51.180047 1255771 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1114 14:03:51.180052 1255771 command_runner.go:130] >       ],
	I1114 14:03:51.180057 1255771 command_runner.go:130] >       "repoDigests": [
	I1114 14:03:51.180067 1255771 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1114 14:03:51.180077 1255771 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1114 14:03:51.180081 1255771 command_runner.go:130] >       ],
	I1114 14:03:51.180090 1255771 command_runner.go:130] >       "size": "29037500",
	I1114 14:03:51.180095 1255771 command_runner.go:130] >       "uid": null,
	I1114 14:03:51.180100 1255771 command_runner.go:130] >       "username": "",
	I1114 14:03:51.180107 1255771 command_runner.go:130] >       "spec": null,
	I1114 14:03:51.180112 1255771 command_runner.go:130] >       "pinned": false
	I1114 14:03:51.180117 1255771 command_runner.go:130] >     },
	I1114 14:03:51.180121 1255771 command_runner.go:130] >     {
	I1114 14:03:51.180132 1255771 command_runner.go:130] >       "id": "97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108",
	I1114 14:03:51.180140 1255771 command_runner.go:130] >       "repoTags": [
	I1114 14:03:51.180148 1255771 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1114 14:03:51.180158 1255771 command_runner.go:130] >       ],
	I1114 14:03:51.180163 1255771 command_runner.go:130] >       "repoDigests": [
	I1114 14:03:51.180174 1255771 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105",
	I1114 14:03:51.180187 1255771 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"
	I1114 14:03:51.180191 1255771 command_runner.go:130] >       ],
	I1114 14:03:51.180197 1255771 command_runner.go:130] >       "size": "51393451",
	I1114 14:03:51.180204 1255771 command_runner.go:130] >       "uid": null,
	I1114 14:03:51.180209 1255771 command_runner.go:130] >       "username": "",
	I1114 14:03:51.180213 1255771 command_runner.go:130] >       "spec": null,
	I1114 14:03:51.180221 1255771 command_runner.go:130] >       "pinned": false
	I1114 14:03:51.180230 1255771 command_runner.go:130] >     },
	I1114 14:03:51.180235 1255771 command_runner.go:130] >     {
	I1114 14:03:51.180243 1255771 command_runner.go:130] >       "id": "9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace",
	I1114 14:03:51.180251 1255771 command_runner.go:130] >       "repoTags": [
	I1114 14:03:51.180257 1255771 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1114 14:03:51.180261 1255771 command_runner.go:130] >       ],
	I1114 14:03:51.180266 1255771 command_runner.go:130] >       "repoDigests": [
	I1114 14:03:51.180286 1255771 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3",
	I1114 14:03:51.180298 1255771 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b"
	I1114 14:03:51.180308 1255771 command_runner.go:130] >       ],
	I1114 14:03:51.180317 1255771 command_runner.go:130] >       "size": "182203183",
	I1114 14:03:51.180322 1255771 command_runner.go:130] >       "uid": {
	I1114 14:03:51.180327 1255771 command_runner.go:130] >         "value": "0"
	I1114 14:03:51.180335 1255771 command_runner.go:130] >       },
	I1114 14:03:51.180340 1255771 command_runner.go:130] >       "username": "",
	I1114 14:03:51.180345 1255771 command_runner.go:130] >       "spec": null,
	I1114 14:03:51.180351 1255771 command_runner.go:130] >       "pinned": false
	I1114 14:03:51.180359 1255771 command_runner.go:130] >     },
	I1114 14:03:51.180363 1255771 command_runner.go:130] >     {
	I1114 14:03:51.180371 1255771 command_runner.go:130] >       "id": "537e9a59ee2fdef3cc7f5ebd14f9c4c77047176fca2bc5599db196217efeb5d7",
	I1114 14:03:51.180376 1255771 command_runner.go:130] >       "repoTags": [
	I1114 14:03:51.180384 1255771 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.3"
	I1114 14:03:51.180390 1255771 command_runner.go:130] >       ],
	I1114 14:03:51.180396 1255771 command_runner.go:130] >       "repoDigests": [
	I1114 14:03:51.180407 1255771 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7055e7e0041a953d3fcec5950b88e8608ce09489f775dc0a8bd62a3300fd3ffa",
	I1114 14:03:51.180423 1255771 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:8db46adefb0f251da210504e2ce268c36a5a7c630667418ea4601f63c9057a2d"
	I1114 14:03:51.180433 1255771 command_runner.go:130] >       ],
	I1114 14:03:51.180439 1255771 command_runner.go:130] >       "size": "121054158",
	I1114 14:03:51.180444 1255771 command_runner.go:130] >       "uid": {
	I1114 14:03:51.180449 1255771 command_runner.go:130] >         "value": "0"
	I1114 14:03:51.180457 1255771 command_runner.go:130] >       },
	I1114 14:03:51.180462 1255771 command_runner.go:130] >       "username": "",
	I1114 14:03:51.180467 1255771 command_runner.go:130] >       "spec": null,
	I1114 14:03:51.180474 1255771 command_runner.go:130] >       "pinned": false
	I1114 14:03:51.180479 1255771 command_runner.go:130] >     },
	I1114 14:03:51.180483 1255771 command_runner.go:130] >     {
	I1114 14:03:51.180491 1255771 command_runner.go:130] >       "id": "8276439b4f237dda1f7820b0fcef600bb5662e441aa00e7b7c45843e60f04a16",
	I1114 14:03:51.180499 1255771 command_runner.go:130] >       "repoTags": [
	I1114 14:03:51.180505 1255771 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.3"
	I1114 14:03:51.180510 1255771 command_runner.go:130] >       ],
	I1114 14:03:51.180520 1255771 command_runner.go:130] >       "repoDigests": [
	I1114 14:03:51.180530 1255771 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707",
	I1114 14:03:51.180559 1255771 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c53671810fed4fd98b482a8e32f105585826221a4657ebd6181bc20becd3f0be"
	I1114 14:03:51.180567 1255771 command_runner.go:130] >       ],
	I1114 14:03:51.180572 1255771 command_runner.go:130] >       "size": "117252916",
	I1114 14:03:51.180578 1255771 command_runner.go:130] >       "uid": {
	I1114 14:03:51.180586 1255771 command_runner.go:130] >         "value": "0"
	I1114 14:03:51.180590 1255771 command_runner.go:130] >       },
	I1114 14:03:51.180595 1255771 command_runner.go:130] >       "username": "",
	I1114 14:03:51.180603 1255771 command_runner.go:130] >       "spec": null,
	I1114 14:03:51.180614 1255771 command_runner.go:130] >       "pinned": false
	I1114 14:03:51.180618 1255771 command_runner.go:130] >     },
	I1114 14:03:51.180623 1255771 command_runner.go:130] >     {
	I1114 14:03:51.180635 1255771 command_runner.go:130] >       "id": "a5dd5cdd6d3ef8058b7fbcecacbcee7f522fa8b9f3bb53bac6570e62ba2cbdbd",
	I1114 14:03:51.180640 1255771 command_runner.go:130] >       "repoTags": [
	I1114 14:03:51.180647 1255771 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.3"
	I1114 14:03:51.180655 1255771 command_runner.go:130] >       ],
	I1114 14:03:51.180660 1255771 command_runner.go:130] >       "repoDigests": [
	I1114 14:03:51.180668 1255771 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:0228eb00239c0ea5f627a6191fc192f4e20606b57419ce9e2e0c1588f960b483",
	I1114 14:03:51.180680 1255771 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:73a9f275e1fa5f0b9ae744914764847c2c4fdc66e9e528d67dea70007f9a6072"
	I1114 14:03:51.180685 1255771 command_runner.go:130] >       ],
	I1114 14:03:51.180695 1255771 command_runner.go:130] >       "size": "69926807",
	I1114 14:03:51.180704 1255771 command_runner.go:130] >       "uid": null,
	I1114 14:03:51.180709 1255771 command_runner.go:130] >       "username": "",
	I1114 14:03:51.180713 1255771 command_runner.go:130] >       "spec": null,
	I1114 14:03:51.180723 1255771 command_runner.go:130] >       "pinned": false
	I1114 14:03:51.180727 1255771 command_runner.go:130] >     },
	I1114 14:03:51.180732 1255771 command_runner.go:130] >     {
	I1114 14:03:51.180741 1255771 command_runner.go:130] >       "id": "42a4e73724daac2ee0c96eeeb79b9cf5f242fc3927ccfdc4df63b58140097314",
	I1114 14:03:51.180750 1255771 command_runner.go:130] >       "repoTags": [
	I1114 14:03:51.180756 1255771 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.3"
	I1114 14:03:51.180760 1255771 command_runner.go:130] >       ],
	I1114 14:03:51.180765 1255771 command_runner.go:130] >       "repoDigests": [
	I1114 14:03:51.180805 1255771 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725",
	I1114 14:03:51.180820 1255771 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:c0c5cdf040306fccc833bfa847f74be0f6ea5c828ba6c2a443210f68aa9bdd7c"
	I1114 14:03:51.180825 1255771 command_runner.go:130] >       ],
	I1114 14:03:51.180835 1255771 command_runner.go:130] >       "size": "59188020",
	I1114 14:03:51.180839 1255771 command_runner.go:130] >       "uid": {
	I1114 14:03:51.180844 1255771 command_runner.go:130] >         "value": "0"
	I1114 14:03:51.180851 1255771 command_runner.go:130] >       },
	I1114 14:03:51.180859 1255771 command_runner.go:130] >       "username": "",
	I1114 14:03:51.180869 1255771 command_runner.go:130] >       "spec": null,
	I1114 14:03:51.180874 1255771 command_runner.go:130] >       "pinned": false
	I1114 14:03:51.180879 1255771 command_runner.go:130] >     },
	I1114 14:03:51.180888 1255771 command_runner.go:130] >     {
	I1114 14:03:51.180895 1255771 command_runner.go:130] >       "id": "829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e",
	I1114 14:03:51.180905 1255771 command_runner.go:130] >       "repoTags": [
	I1114 14:03:51.180913 1255771 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1114 14:03:51.180917 1255771 command_runner.go:130] >       ],
	I1114 14:03:51.180922 1255771 command_runner.go:130] >       "repoDigests": [
	I1114 14:03:51.180934 1255771 command_runner.go:130] >         "registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6",
	I1114 14:03:51.180943 1255771 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"
	I1114 14:03:51.180950 1255771 command_runner.go:130] >       ],
	I1114 14:03:51.180955 1255771 command_runner.go:130] >       "size": "520014",
	I1114 14:03:51.180960 1255771 command_runner.go:130] >       "uid": {
	I1114 14:03:51.180970 1255771 command_runner.go:130] >         "value": "65535"
	I1114 14:03:51.180977 1255771 command_runner.go:130] >       },
	I1114 14:03:51.180988 1255771 command_runner.go:130] >       "username": "",
	I1114 14:03:51.180993 1255771 command_runner.go:130] >       "spec": null,
	I1114 14:03:51.181004 1255771 command_runner.go:130] >       "pinned": false
	I1114 14:03:51.181008 1255771 command_runner.go:130] >     }
	I1114 14:03:51.181012 1255771 command_runner.go:130] >   ]
	I1114 14:03:51.181016 1255771 command_runner.go:130] > }
	I1114 14:03:51.184017 1255771 crio.go:496] all images are preloaded for cri-o runtime.
	I1114 14:03:51.184052 1255771 cache_images.go:84] Images are preloaded, skipping loading
	I1114 14:03:51.184129 1255771 ssh_runner.go:195] Run: crio config
	I1114 14:03:51.239796 1255771 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1114 14:03:51.239826 1255771 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1114 14:03:51.239835 1255771 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1114 14:03:51.239839 1255771 command_runner.go:130] > #
	I1114 14:03:51.239847 1255771 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1114 14:03:51.239855 1255771 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1114 14:03:51.239863 1255771 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1114 14:03:51.239877 1255771 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1114 14:03:51.239883 1255771 command_runner.go:130] > # reload'.
	I1114 14:03:51.239896 1255771 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1114 14:03:51.239910 1255771 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1114 14:03:51.239922 1255771 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1114 14:03:51.239936 1255771 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1114 14:03:51.239945 1255771 command_runner.go:130] > [crio]
	I1114 14:03:51.239952 1255771 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1114 14:03:51.239958 1255771 command_runner.go:130] > # containers images, in this directory.
	I1114 14:03:51.239967 1255771 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1114 14:03:51.239979 1255771 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1114 14:03:51.239985 1255771 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I1114 14:03:51.239996 1255771 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1114 14:03:51.240007 1255771 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1114 14:03:51.240015 1255771 command_runner.go:130] > # storage_driver = "vfs"
	I1114 14:03:51.240023 1255771 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1114 14:03:51.240032 1255771 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1114 14:03:51.240037 1255771 command_runner.go:130] > # storage_option = [
	I1114 14:03:51.240041 1255771 command_runner.go:130] > # ]
	I1114 14:03:51.240052 1255771 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1114 14:03:51.240063 1255771 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1114 14:03:51.240069 1255771 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1114 14:03:51.240079 1255771 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1114 14:03:51.240091 1255771 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1114 14:03:51.240100 1255771 command_runner.go:130] > # always happen on a node reboot
	I1114 14:03:51.240107 1255771 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1114 14:03:51.240114 1255771 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1114 14:03:51.240123 1255771 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1114 14:03:51.240134 1255771 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1114 14:03:51.240145 1255771 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1114 14:03:51.240154 1255771 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1114 14:03:51.240167 1255771 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1114 14:03:51.240176 1255771 command_runner.go:130] > # internal_wipe = true
	I1114 14:03:51.240182 1255771 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1114 14:03:51.240193 1255771 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1114 14:03:51.240200 1255771 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1114 14:03:51.240206 1255771 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1114 14:03:51.240219 1255771 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1114 14:03:51.240228 1255771 command_runner.go:130] > [crio.api]
	I1114 14:03:51.240235 1255771 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1114 14:03:51.240244 1255771 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1114 14:03:51.240253 1255771 command_runner.go:130] > # IP address on which the stream server will listen.
	I1114 14:03:51.240262 1255771 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1114 14:03:51.240270 1255771 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1114 14:03:51.240279 1255771 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1114 14:03:51.240284 1255771 command_runner.go:130] > # stream_port = "0"
	I1114 14:03:51.240291 1255771 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1114 14:03:51.240300 1255771 command_runner.go:130] > # stream_enable_tls = false
	I1114 14:03:51.240309 1255771 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1114 14:03:51.240317 1255771 command_runner.go:130] > # stream_idle_timeout = ""
	I1114 14:03:51.240325 1255771 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1114 14:03:51.240339 1255771 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1114 14:03:51.240347 1255771 command_runner.go:130] > # minutes.
	I1114 14:03:51.240355 1255771 command_runner.go:130] > # stream_tls_cert = ""
	I1114 14:03:51.240365 1255771 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1114 14:03:51.240373 1255771 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1114 14:03:51.240382 1255771 command_runner.go:130] > # stream_tls_key = ""
	I1114 14:03:51.240389 1255771 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1114 14:03:51.240400 1255771 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1114 14:03:51.240412 1255771 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1114 14:03:51.240420 1255771 command_runner.go:130] > # stream_tls_ca = ""
	I1114 14:03:51.240429 1255771 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1114 14:03:51.240439 1255771 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1114 14:03:51.240448 1255771 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1114 14:03:51.240454 1255771 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
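	For context, the stream and gRPC settings above live under the [crio.api] table, and TLS for the stream server only takes effect once the cert and key (and optionally a client CA) point at real files. A minimal drop-in sketch, with purely illustrative paths and port:
	
	[crio.api]
	stream_address = "127.0.0.1"
	stream_port = "10010"                      # fixed port instead of a random one (illustrative)
	stream_enable_tls = true
	stream_tls_cert = "/etc/crio/stream.crt"   # illustrative path
	stream_tls_key = "/etc/crio/stream.key"    # illustrative path
	stream_tls_ca = "/etc/crio/stream-ca.crt"  # illustrative path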
	I1114 14:03:51.240495 1255771 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1114 14:03:51.240506 1255771 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1114 14:03:51.240514 1255771 command_runner.go:130] > [crio.runtime]
	I1114 14:03:51.240522 1255771 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1114 14:03:51.240531 1255771 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1114 14:03:51.240536 1255771 command_runner.go:130] > # "nofile=1024:2048"
	I1114 14:03:51.240563 1255771 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1114 14:03:51.240571 1255771 command_runner.go:130] > # default_ulimits = [
	I1114 14:03:51.240576 1255771 command_runner.go:130] > # ]
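	Following the "<ulimit name>=<soft limit>:<hard limit>" format described above, a filled-in list raising the open-file limit for all containers would look like (values illustrative):
	
	[crio.runtime]
	default_ulimits = [
		"nofile=1024:2048",
	]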
	I1114 14:03:51.240588 1255771 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1114 14:03:51.240599 1255771 command_runner.go:130] > # no_pivot = false
	I1114 14:03:51.240607 1255771 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1114 14:03:51.240628 1255771 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1114 14:03:51.240638 1255771 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1114 14:03:51.240646 1255771 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1114 14:03:51.240655 1255771 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1114 14:03:51.240668 1255771 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1114 14:03:51.240676 1255771 command_runner.go:130] > # conmon = ""
	I1114 14:03:51.240681 1255771 command_runner.go:130] > # Cgroup setting for conmon
	I1114 14:03:51.240694 1255771 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1114 14:03:51.240699 1255771 command_runner.go:130] > conmon_cgroup = "pod"
	I1114 14:03:51.240706 1255771 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1114 14:03:51.240716 1255771 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1114 14:03:51.240725 1255771 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1114 14:03:51.240733 1255771 command_runner.go:130] > # conmon_env = [
	I1114 14:03:51.240738 1255771 command_runner.go:130] > # ]
	I1114 14:03:51.240749 1255771 command_runner.go:130] > # Additional environment variables to set for all the
	I1114 14:03:51.240758 1255771 command_runner.go:130] > # containers. These are overridden if set in the
	I1114 14:03:51.240770 1255771 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1114 14:03:51.240775 1255771 command_runner.go:130] > # default_env = [
	I1114 14:03:51.240781 1255771 command_runner.go:130] > # ]
	I1114 14:03:51.240788 1255771 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1114 14:03:51.240797 1255771 command_runner.go:130] > # selinux = false
	I1114 14:03:51.240805 1255771 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1114 14:03:51.240816 1255771 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1114 14:03:51.240826 1255771 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1114 14:03:51.240837 1255771 command_runner.go:130] > # seccomp_profile = ""
	I1114 14:03:51.240848 1255771 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1114 14:03:51.240855 1255771 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1114 14:03:51.240864 1255771 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1114 14:03:51.240870 1255771 command_runner.go:130] > # which might increase security.
	I1114 14:03:51.240878 1255771 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I1114 14:03:51.240889 1255771 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1114 14:03:51.240897 1255771 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1114 14:03:51.240907 1255771 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1114 14:03:51.240919 1255771 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1114 14:03:51.240928 1255771 command_runner.go:130] > # This option supports live configuration reload.
	I1114 14:03:51.240936 1255771 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1114 14:03:51.240949 1255771 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1114 14:03:51.240958 1255771 command_runner.go:130] > # the cgroup blockio controller.
	I1114 14:03:51.240966 1255771 command_runner.go:130] > # blockio_config_file = ""
	I1114 14:03:51.240974 1255771 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1114 14:03:51.240985 1255771 command_runner.go:130] > # irqbalance daemon.
	I1114 14:03:51.240995 1255771 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1114 14:03:51.241007 1255771 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1114 14:03:51.241016 1255771 command_runner.go:130] > # This option supports live configuration reload.
	I1114 14:03:51.241021 1255771 command_runner.go:130] > # rdt_config_file = ""
	I1114 14:03:51.241028 1255771 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1114 14:03:51.241033 1255771 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1114 14:03:51.241042 1255771 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1114 14:03:51.241050 1255771 command_runner.go:130] > # separate_pull_cgroup = ""
	I1114 14:03:51.241058 1255771 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1114 14:03:51.241069 1255771 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1114 14:03:51.241077 1255771 command_runner.go:130] > # will be added.
	I1114 14:03:51.241082 1255771 command_runner.go:130] > # default_capabilities = [
	I1114 14:03:51.241090 1255771 command_runner.go:130] > # 	"CHOWN",
	I1114 14:03:51.241096 1255771 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1114 14:03:51.241103 1255771 command_runner.go:130] > # 	"FSETID",
	I1114 14:03:51.241108 1255771 command_runner.go:130] > # 	"FOWNER",
	I1114 14:03:51.241112 1255771 command_runner.go:130] > # 	"SETGID",
	I1114 14:03:51.241120 1255771 command_runner.go:130] > # 	"SETUID",
	I1114 14:03:51.241124 1255771 command_runner.go:130] > # 	"SETPCAP",
	I1114 14:03:51.241133 1255771 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1114 14:03:51.241138 1255771 command_runner.go:130] > # 	"KILL",
	I1114 14:03:51.241145 1255771 command_runner.go:130] > # ]
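	Uncommenting the list above and removing entries is how the default capability set is tightened; for instance, a sketch that drops KILL and NET_BIND_SERVICE from the defaults:
	
	[crio.runtime]
	default_capabilities = [
		"CHOWN", "DAC_OVERRIDE", "FSETID", "FOWNER",
		"SETGID", "SETUID", "SETPCAP",
	]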
	I1114 14:03:51.241154 1255771 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1114 14:03:51.241166 1255771 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1114 14:03:51.241177 1255771 command_runner.go:130] > # add_inheritable_capabilities = true
	I1114 14:03:51.241186 1255771 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1114 14:03:51.241194 1255771 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1114 14:03:51.241200 1255771 command_runner.go:130] > # default_sysctls = [
	I1114 14:03:51.241204 1255771 command_runner.go:130] > # ]
	I1114 14:03:51.241211 1255771 command_runner.go:130] > # List of devices on the host that a
	I1114 14:03:51.241222 1255771 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1114 14:03:51.241230 1255771 command_runner.go:130] > # allowed_devices = [
	I1114 14:03:51.241238 1255771 command_runner.go:130] > # 	"/dev/fuse",
	I1114 14:03:51.241243 1255771 command_runner.go:130] > # ]
	I1114 14:03:51.241253 1255771 command_runner.go:130] > # List of additional devices, specified as
	I1114 14:03:51.241300 1255771 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1114 14:03:51.241312 1255771 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1114 14:03:51.241320 1255771 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1114 14:03:51.241329 1255771 command_runner.go:130] > # additional_devices = [
	I1114 14:03:51.241337 1255771 command_runner.go:130] > # ]
	I1114 14:03:51.241344 1255771 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1114 14:03:51.241351 1255771 command_runner.go:130] > # cdi_spec_dirs = [
	I1114 14:03:51.241355 1255771 command_runner.go:130] > # 	"/etc/cdi",
	I1114 14:03:51.241361 1255771 command_runner.go:130] > # 	"/var/run/cdi",
	I1114 14:03:51.241368 1255771 command_runner.go:130] > # ]
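	Putting the two device knobs together: allowed_devices whitelists what the "io.kubernetes.cri-o.Devices" annotation may request, while additional_devices are injected into every container. A sketch using the values from the comments above:
	
	[crio.runtime]
	allowed_devices = [
		"/dev/fuse",
	]
	additional_devices = [
		"/dev/sdc:/dev/xvdc:rwm",   # host:container:permissions
	]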
	I1114 14:03:51.241375 1255771 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1114 14:03:51.241386 1255771 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1114 14:03:51.241396 1255771 command_runner.go:130] > # Defaults to false.
	I1114 14:03:51.241402 1255771 command_runner.go:130] > # device_ownership_from_security_context = false
	I1114 14:03:51.241415 1255771 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1114 14:03:51.241426 1255771 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1114 14:03:51.241431 1255771 command_runner.go:130] > # hooks_dir = [
	I1114 14:03:51.241437 1255771 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1114 14:03:51.241441 1255771 command_runner.go:130] > # ]
	I1114 14:03:51.241448 1255771 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1114 14:03:51.241458 1255771 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1114 14:03:51.241469 1255771 command_runner.go:130] > # its default mounts from the following two files:
	I1114 14:03:51.241477 1255771 command_runner.go:130] > #
	I1114 14:03:51.241485 1255771 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1114 14:03:51.241496 1255771 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1114 14:03:51.241506 1255771 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1114 14:03:51.241510 1255771 command_runner.go:130] > #
	I1114 14:03:51.241517 1255771 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1114 14:03:51.241525 1255771 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1114 14:03:51.241535 1255771 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1114 14:03:51.241545 1255771 command_runner.go:130] > #      only add mounts it finds in this file.
	I1114 14:03:51.241550 1255771 command_runner.go:130] > #
	I1114 14:03:51.241562 1255771 command_runner.go:130] > # default_mounts_file = ""
	I1114 14:03:51.241572 1255771 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1114 14:03:51.241584 1255771 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1114 14:03:51.241589 1255771 command_runner.go:130] > # pids_limit = 0
	I1114 14:03:51.241598 1255771 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1114 14:03:51.241605 1255771 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1114 14:03:51.241616 1255771 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1114 14:03:51.241626 1255771 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1114 14:03:51.241634 1255771 command_runner.go:130] > # log_size_max = -1
	I1114 14:03:51.241644 1255771 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1114 14:03:51.241654 1255771 command_runner.go:130] > # log_to_journald = false
	I1114 14:03:51.241665 1255771 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1114 14:03:51.241671 1255771 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1114 14:03:51.241677 1255771 command_runner.go:130] > # Path to directory for container attach sockets.
	I1114 14:03:51.241683 1255771 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1114 14:03:51.241693 1255771 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1114 14:03:51.241698 1255771 command_runner.go:130] > # bind_mount_prefix = ""
	I1114 14:03:51.241708 1255771 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1114 14:03:51.241719 1255771 command_runner.go:130] > # read_only = false
	I1114 14:03:51.241730 1255771 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1114 14:03:51.241741 1255771 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1114 14:03:51.241746 1255771 command_runner.go:130] > # live configuration reload.
	I1114 14:03:51.241751 1255771 command_runner.go:130] > # log_level = "info"
	I1114 14:03:51.241758 1255771 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1114 14:03:51.241764 1255771 command_runner.go:130] > # This option supports live configuration reload.
	I1114 14:03:51.241772 1255771 command_runner.go:130] > # log_filter = ""
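	The three logging options compose; for instance, a debug-level setup that also filters messages and mirrors output to journald (the regular expression is illustrative):
	
	[crio.runtime]
	log_level = "debug"
	log_filter = "^pod .*"    # illustrative regular expression
	log_to_journald = true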
	I1114 14:03:51.241780 1255771 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1114 14:03:51.241790 1255771 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1114 14:03:51.241799 1255771 command_runner.go:130] > # separated by comma.
	I1114 14:03:51.241804 1255771 command_runner.go:130] > # uid_mappings = ""
	I1114 14:03:51.241814 1255771 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1114 14:03:51.241825 1255771 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1114 14:03:51.241830 1255771 command_runner.go:130] > # separated by comma.
	I1114 14:03:51.241835 1255771 command_runner.go:130] > # gid_mappings = ""
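	Both mapping options use the same container:host:size triple, with multiple ranges separated by commas; a sketch mapping container root to an unprivileged host range (IDs illustrative):
	
	[crio.runtime]
	uid_mappings = "0:100000:65536"
	gid_mappings = "0:100000:65536"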
	I1114 14:03:51.241842 1255771 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1114 14:03:51.241854 1255771 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1114 14:03:51.241865 1255771 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1114 14:03:51.241874 1255771 command_runner.go:130] > # minimum_mappable_uid = -1
	I1114 14:03:51.241881 1255771 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1114 14:03:51.241892 1255771 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1114 14:03:51.241902 1255771 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1114 14:03:51.241908 1255771 command_runner.go:130] > # minimum_mappable_gid = -1
	I1114 14:03:51.241915 1255771 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1114 14:03:51.241924 1255771 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1114 14:03:51.241938 1255771 command_runner.go:130] > # value is 30s; lower values are ignored by CRI-O.
	I1114 14:03:51.241946 1255771 command_runner.go:130] > # ctr_stop_timeout = 30
	I1114 14:03:51.241953 1255771 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1114 14:03:51.241979 1255771 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1114 14:03:51.241990 1255771 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1114 14:03:51.241999 1255771 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1114 14:03:51.242005 1255771 command_runner.go:130] > # drop_infra_ctr = true
	I1114 14:03:51.242017 1255771 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1114 14:03:51.242028 1255771 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1114 14:03:51.242040 1255771 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1114 14:03:51.242050 1255771 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1114 14:03:51.242064 1255771 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1114 14:03:51.242070 1255771 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1114 14:03:51.242075 1255771 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1114 14:03:51.242084 1255771 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1114 14:03:51.242092 1255771 command_runner.go:130] > # pinns_path = ""
	I1114 14:03:51.242099 1255771 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1114 14:03:51.242110 1255771 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1114 14:03:51.242121 1255771 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1114 14:03:51.242129 1255771 command_runner.go:130] > # default_runtime = "runc"
	I1114 14:03:51.242136 1255771 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1114 14:03:51.242145 1255771 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I1114 14:03:51.242156 1255771 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1114 14:03:51.242165 1255771 command_runner.go:130] > # creation as a file is not desired either.
	I1114 14:03:51.242176 1255771 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1114 14:03:51.242185 1255771 command_runner.go:130] > # the hostname is being managed dynamically.
	I1114 14:03:51.242191 1255771 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1114 14:03:51.242198 1255771 command_runner.go:130] > # ]
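	Following the /etc/hostname example in the comment above, a populated list looks like:
	
	[crio.runtime]
	absent_mount_sources_to_reject = [
		"/etc/hostname",
	]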
	I1114 14:03:51.242209 1255771 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1114 14:03:51.242220 1255771 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1114 14:03:51.242228 1255771 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1114 14:03:51.242236 1255771 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1114 14:03:51.242245 1255771 command_runner.go:130] > #
	I1114 14:03:51.242251 1255771 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1114 14:03:51.242260 1255771 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1114 14:03:51.242265 1255771 command_runner.go:130] > #  runtime_type = "oci"
	I1114 14:03:51.242274 1255771 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1114 14:03:51.242280 1255771 command_runner.go:130] > #  privileged_without_host_devices = false
	I1114 14:03:51.242289 1255771 command_runner.go:130] > #  allowed_annotations = []
	I1114 14:03:51.242293 1255771 command_runner.go:130] > # Where:
	I1114 14:03:51.242300 1255771 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1114 14:03:51.242308 1255771 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1114 14:03:51.242320 1255771 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1114 14:03:51.242327 1255771 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1114 14:03:51.242335 1255771 command_runner.go:130] > #   in $PATH.
	I1114 14:03:51.242343 1255771 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1114 14:03:51.242354 1255771 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1114 14:03:51.242365 1255771 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1114 14:03:51.242372 1255771 command_runner.go:130] > #   state.
	I1114 14:03:51.242380 1255771 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1114 14:03:51.242387 1255771 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1114 14:03:51.242395 1255771 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1114 14:03:51.242405 1255771 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1114 14:03:51.242412 1255771 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1114 14:03:51.242424 1255771 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1114 14:03:51.242433 1255771 command_runner.go:130] > #   The currently recognized values are:
	I1114 14:03:51.242441 1255771 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1114 14:03:51.242453 1255771 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1114 14:03:51.242460 1255771 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1114 14:03:51.242467 1255771 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1114 14:03:51.242476 1255771 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1114 14:03:51.242487 1255771 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1114 14:03:51.242499 1255771 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1114 14:03:51.242510 1255771 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1114 14:03:51.242522 1255771 command_runner.go:130] > #   should be moved to the container's cgroup
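	Applied to the crun handler stubbed out further down, a filled-in table following this format would look roughly like the sketch below (paths illustrative, assuming crun is installed on the host):
	
	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"   # illustrative; omit so the handler name is searched in $PATH
	runtime_type = "oci"
	runtime_root = "/run/crun"
	allowed_annotations = [
		"io.kubernetes.cri-o.Devices",
	]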
	I1114 14:03:51.242532 1255771 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1114 14:03:51.242538 1255771 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I1114 14:03:51.242543 1255771 command_runner.go:130] > runtime_type = "oci"
	I1114 14:03:51.242548 1255771 command_runner.go:130] > runtime_root = "/run/runc"
	I1114 14:03:51.242552 1255771 command_runner.go:130] > runtime_config_path = ""
	I1114 14:03:51.242561 1255771 command_runner.go:130] > monitor_path = ""
	I1114 14:03:51.242566 1255771 command_runner.go:130] > monitor_cgroup = ""
	I1114 14:03:51.242574 1255771 command_runner.go:130] > monitor_exec_cgroup = ""
	I1114 14:03:51.242624 1255771 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1114 14:03:51.242635 1255771 command_runner.go:130] > # running containers
	I1114 14:03:51.242641 1255771 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1114 14:03:51.242649 1255771 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1114 14:03:51.242659 1255771 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1114 14:03:51.242670 1255771 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1114 14:03:51.242676 1255771 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1114 14:03:51.242685 1255771 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1114 14:03:51.242691 1255771 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1114 14:03:51.242698 1255771 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1114 14:03:51.242704 1255771 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1114 14:03:51.242709 1255771 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1114 14:03:51.242722 1255771 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1114 14:03:51.242732 1255771 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1114 14:03:51.242743 1255771 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1114 14:03:51.242756 1255771 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1114 14:03:51.242768 1255771 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" (to configure the cpuset).
	I1114 14:03:51.242775 1255771 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1114 14:03:51.242787 1255771 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1114 14:03:51.242799 1255771 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1114 14:03:51.242810 1255771 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1114 14:03:51.242822 1255771 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1114 14:03:51.242829 1255771 command_runner.go:130] > # Example:
	I1114 14:03:51.242835 1255771 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1114 14:03:51.242844 1255771 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1114 14:03:51.242850 1255771 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1114 14:03:51.242857 1255771 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1114 14:03:51.242864 1255771 command_runner.go:130] > # cpuset = "0-1"
	I1114 14:03:51.242872 1255771 command_runner.go:130] > # cpushares = 0
	I1114 14:03:51.242877 1255771 command_runner.go:130] > # Where:
	I1114 14:03:51.242885 1255771 command_runner.go:130] > # The workload name is workload-type.
	I1114 14:03:51.242894 1255771 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1114 14:03:51.242904 1255771 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1114 14:03:51.242914 1255771 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1114 14:03:51.242927 1255771 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1114 14:03:51.242934 1255771 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1114 14:03:51.242938 1255771 command_runner.go:130] > # 
	I1114 14:03:51.242946 1255771 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1114 14:03:51.242953 1255771 command_runner.go:130] > #
	I1114 14:03:51.242962 1255771 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1114 14:03:51.242973 1255771 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1114 14:03:51.242984 1255771 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1114 14:03:51.242995 1255771 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1114 14:03:51.243005 1255771 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1114 14:03:51.243009 1255771 command_runner.go:130] > [crio.image]
	I1114 14:03:51.243020 1255771 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1114 14:03:51.243025 1255771 command_runner.go:130] > # default_transport = "docker://"
	I1114 14:03:51.243036 1255771 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1114 14:03:51.243048 1255771 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1114 14:03:51.243057 1255771 command_runner.go:130] > # global_auth_file = ""
	I1114 14:03:51.243063 1255771 command_runner.go:130] > # The image used to instantiate infra containers.
	I1114 14:03:51.243073 1255771 command_runner.go:130] > # This option supports live configuration reload.
	I1114 14:03:51.243082 1255771 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1114 14:03:51.243090 1255771 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1114 14:03:51.243100 1255771 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1114 14:03:51.243106 1255771 command_runner.go:130] > # This option supports live configuration reload.
	I1114 14:03:51.243114 1255771 command_runner.go:130] > # pause_image_auth_file = ""
	I1114 14:03:51.243121 1255771 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1114 14:03:51.243132 1255771 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1114 14:03:51.243143 1255771 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1114 14:03:51.243153 1255771 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1114 14:03:51.243161 1255771 command_runner.go:130] > # pause_command = "/pause"
	I1114 14:03:51.243168 1255771 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1114 14:03:51.243178 1255771 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1114 14:03:51.243186 1255771 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1114 14:03:51.243197 1255771 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1114 14:03:51.243205 1255771 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1114 14:03:51.243214 1255771 command_runner.go:130] > # signature_policy = ""
	I1114 14:03:51.243239 1255771 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1114 14:03:51.243250 1255771 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1114 14:03:51.243255 1255771 command_runner.go:130] > # changing them here.
	I1114 14:03:51.243260 1255771 command_runner.go:130] > # insecure_registries = [
	I1114 14:03:51.243264 1255771 command_runner.go:130] > # ]
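	If the system-wide defaults are not used, the shape mirrors containers-registries.conf(5); an illustrative sketch that skips TLS verification for a single local registry only:
	
	[crio.image]
	insecure_registries = [
		"registry.local:5000",   # illustrative host
	]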
	I1114 14:03:51.243278 1255771 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1114 14:03:51.243289 1255771 command_runner.go:130] > # ignore; the last will ignore volumes entirely.
	I1114 14:03:51.243296 1255771 command_runner.go:130] > # image_volumes = "mkdir"
	I1114 14:03:51.243302 1255771 command_runner.go:130] > # Temporary directory to use for storing big files
	I1114 14:03:51.243307 1255771 command_runner.go:130] > # big_files_temporary_dir = ""
	I1114 14:03:51.243314 1255771 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1114 14:03:51.243319 1255771 command_runner.go:130] > # CNI plugins.
	I1114 14:03:51.243323 1255771 command_runner.go:130] > [crio.network]
	I1114 14:03:51.243331 1255771 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1114 14:03:51.243338 1255771 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1114 14:03:51.243343 1255771 command_runner.go:130] > # cni_default_network = ""
	I1114 14:03:51.243350 1255771 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1114 14:03:51.243355 1255771 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1114 14:03:51.243362 1255771 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1114 14:03:51.243366 1255771 command_runner.go:130] > # plugin_dirs = [
	I1114 14:03:51.243371 1255771 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1114 14:03:51.243374 1255771 command_runner.go:130] > # ]
	I1114 14:03:51.243381 1255771 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1114 14:03:51.243386 1255771 command_runner.go:130] > [crio.metrics]
	I1114 14:03:51.243391 1255771 command_runner.go:130] > # Globally enable or disable metrics support.
	I1114 14:03:51.243396 1255771 command_runner.go:130] > # enable_metrics = false
	I1114 14:03:51.243402 1255771 command_runner.go:130] > # Specify enabled metrics collectors.
	I1114 14:03:51.243407 1255771 command_runner.go:130] > # Per default all metrics are enabled.
	I1114 14:03:51.243414 1255771 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1114 14:03:51.243422 1255771 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1114 14:03:51.243429 1255771 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1114 14:03:51.243436 1255771 command_runner.go:130] > # metrics_collectors = [
	I1114 14:03:51.243441 1255771 command_runner.go:130] > # 	"operations",
	I1114 14:03:51.243446 1255771 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1114 14:03:51.243452 1255771 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1114 14:03:51.243457 1255771 command_runner.go:130] > # 	"operations_errors",
	I1114 14:03:51.243462 1255771 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1114 14:03:51.243467 1255771 command_runner.go:130] > # 	"image_pulls_by_name",
	I1114 14:03:51.243472 1255771 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1114 14:03:51.243477 1255771 command_runner.go:130] > # 	"image_pulls_failures",
	I1114 14:03:51.243482 1255771 command_runner.go:130] > # 	"image_pulls_successes",
	I1114 14:03:51.243487 1255771 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1114 14:03:51.243492 1255771 command_runner.go:130] > # 	"image_layer_reuse",
	I1114 14:03:51.243497 1255771 command_runner.go:130] > # 	"containers_oom_total",
	I1114 14:03:51.243501 1255771 command_runner.go:130] > # 	"containers_oom",
	I1114 14:03:51.243506 1255771 command_runner.go:130] > # 	"processes_defunct",
	I1114 14:03:51.243511 1255771 command_runner.go:130] > # 	"operations_total",
	I1114 14:03:51.243516 1255771 command_runner.go:130] > # 	"operations_latency_seconds",
	I1114 14:03:51.243521 1255771 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1114 14:03:51.243528 1255771 command_runner.go:130] > # 	"operations_errors_total",
	I1114 14:03:51.243534 1255771 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1114 14:03:51.243540 1255771 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1114 14:03:51.243546 1255771 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1114 14:03:51.243552 1255771 command_runner.go:130] > # 	"image_pulls_success_total",
	I1114 14:03:51.243557 1255771 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1114 14:03:51.243562 1255771 command_runner.go:130] > # 	"containers_oom_count_total",
	I1114 14:03:51.243566 1255771 command_runner.go:130] > # ]
	I1114 14:03:51.243572 1255771 command_runner.go:130] > # The port on which the metrics server will listen.
	I1114 14:03:51.243576 1255771 command_runner.go:130] > # metrics_port = 9090
	I1114 14:03:51.243582 1255771 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1114 14:03:51.243587 1255771 command_runner.go:130] > # metrics_socket = ""
	I1114 14:03:51.243593 1255771 command_runner.go:130] > # The certificate for the secure metrics server.
	I1114 14:03:51.243607 1255771 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1114 14:03:51.243615 1255771 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1114 14:03:51.243620 1255771 command_runner.go:130] > # certificate on any modification event.
	I1114 14:03:51.243625 1255771 command_runner.go:130] > # metrics_cert = ""
	I1114 14:03:51.243631 1255771 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1114 14:03:51.243640 1255771 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1114 14:03:51.243645 1255771 command_runner.go:130] > # metrics_key = ""
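	Taken together, a metrics setup that enables the endpoint, narrows the collectors, and relies on the auto-generated self-signed certificate (created when the cert files are absent) could look like the following sketch:
	
	[crio.metrics]
	enable_metrics = true
	metrics_port = 9090
	metrics_collectors = [
		"operations",
		"image_pulls_failure_total",
	]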
	I1114 14:03:51.243652 1255771 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1114 14:03:51.243656 1255771 command_runner.go:130] > [crio.tracing]
	I1114 14:03:51.243663 1255771 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1114 14:03:51.243667 1255771 command_runner.go:130] > # enable_tracing = false
	I1114 14:03:51.243674 1255771 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1114 14:03:51.243679 1255771 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1114 14:03:51.243685 1255771 command_runner.go:130] > # Number of samples to collect per million spans.
	I1114 14:03:51.243690 1255771 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
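	Enabling tracing is analogous; a sketch exporting every span to a local OTLP collector (endpoint and rate illustrative):
	
	[crio.tracing]
	enable_tracing = true
	tracing_endpoint = "127.0.0.1:4317"
	tracing_sampling_rate_per_million = 1000000   # 1,000,000 per million = sample every span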
	I1114 14:03:51.243698 1255771 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1114 14:03:51.243702 1255771 command_runner.go:130] > [crio.stats]
	I1114 14:03:51.243709 1255771 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1114 14:03:51.243715 1255771 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1114 14:03:51.243720 1255771 command_runner.go:130] > # stats_collection_period = 0
	I1114 14:03:51.244342 1255771 command_runner.go:130] ! time="2023-11-14 14:03:51.231727388Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I1114 14:03:51.244367 1255771 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1114 14:03:51.244456 1255771 cni.go:84] Creating CNI manager for ""
	I1114 14:03:51.244468 1255771 cni.go:136] 1 nodes found, recommending kindnet
	I1114 14:03:51.244497 1255771 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1114 14:03:51.244519 1255771 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-683928 NodeName:multinode-683928 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1114 14:03:51.244684 1255771 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-683928"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1114 14:03:51.244768 1255771 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-683928 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:multinode-683928 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1114 14:03:51.244834 1255771 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1114 14:03:51.254673 1255771 command_runner.go:130] > kubeadm
	I1114 14:03:51.254695 1255771 command_runner.go:130] > kubectl
	I1114 14:03:51.254700 1255771 command_runner.go:130] > kubelet
	I1114 14:03:51.255952 1255771 binaries.go:44] Found k8s binaries, skipping transfer
	I1114 14:03:51.256029 1255771 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1114 14:03:51.266578 1255771 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (426 bytes)
	I1114 14:03:51.287937 1255771 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1114 14:03:51.309567 1255771 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
	I1114 14:03:51.331428 1255771 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1114 14:03:51.336279 1255771 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1114 14:03:51.350245 1255771 certs.go:56] Setting up /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/multinode-683928 for IP: 192.168.58.2
	I1114 14:03:51.350276 1255771 certs.go:190] acquiring lock for shared ca certs: {Name:mk1fdfc415c611904fd8e5ce757e79f4579c67a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 14:03:51.350461 1255771 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17581-1186318/.minikube/ca.key
	I1114 14:03:51.350505 1255771 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17581-1186318/.minikube/proxy-client-ca.key
	I1114 14:03:51.350554 1255771 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/multinode-683928/client.key
	I1114 14:03:51.350570 1255771 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/multinode-683928/client.crt with IP's: []
	I1114 14:03:51.924821 1255771 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/multinode-683928/client.crt ...
	I1114 14:03:51.924857 1255771 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/multinode-683928/client.crt: {Name:mke8f65c76e47c1c37df56fa3bb5f764caffa8c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 14:03:51.925079 1255771 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/multinode-683928/client.key ...
	I1114 14:03:51.925094 1255771 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/multinode-683928/client.key: {Name:mkc9c67564306b6694fe99b092c7ca84681fa925 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 14:03:51.925188 1255771 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/multinode-683928/apiserver.key.cee25041
	I1114 14:03:51.925203 1255771 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/multinode-683928/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1114 14:03:52.429463 1255771 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/multinode-683928/apiserver.crt.cee25041 ...
	I1114 14:03:52.429498 1255771 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/multinode-683928/apiserver.crt.cee25041: {Name:mk582a3faf215d589d49554e32b9d76edb3abe5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 14:03:52.429693 1255771 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/multinode-683928/apiserver.key.cee25041 ...
	I1114 14:03:52.429710 1255771 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/multinode-683928/apiserver.key.cee25041: {Name:mk642b7cab49d4b9dd1fdfb4cf487259648e8875 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 14:03:52.429795 1255771 certs.go:337] copying /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/multinode-683928/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/multinode-683928/apiserver.crt
	I1114 14:03:52.429882 1255771 certs.go:341] copying /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/multinode-683928/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/multinode-683928/apiserver.key
	I1114 14:03:52.429944 1255771 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/multinode-683928/proxy-client.key
	I1114 14:03:52.429962 1255771 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/multinode-683928/proxy-client.crt with IP's: []
	I1114 14:03:52.698937 1255771 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/multinode-683928/proxy-client.crt ...
	I1114 14:03:52.698969 1255771 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/multinode-683928/proxy-client.crt: {Name:mk0d12bf8901208c480675b6bfaa04a07c212af8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 14:03:52.699155 1255771 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/multinode-683928/proxy-client.key ...
	I1114 14:03:52.699169 1255771 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/multinode-683928/proxy-client.key: {Name:mke3129f95650e3822bed0c8b37073784db68afb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 14:03:52.699251 1255771 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/multinode-683928/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1114 14:03:52.699271 1255771 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/multinode-683928/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1114 14:03:52.699283 1255771 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/multinode-683928/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1114 14:03:52.699298 1255771 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/multinode-683928/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1114 14:03:52.699318 1255771 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1186318/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1114 14:03:52.699335 1255771 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1186318/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1114 14:03:52.699349 1255771 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1186318/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1114 14:03:52.699362 1255771 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1186318/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1114 14:03:52.699423 1255771 certs.go:437] found cert: /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/home/jenkins/minikube-integration/17581-1186318/.minikube/certs/1191690.pem (1338 bytes)
	W1114 14:03:52.699465 1255771 certs.go:433] ignoring /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/home/jenkins/minikube-integration/17581-1186318/.minikube/certs/1191690_empty.pem, impossibly tiny 0 bytes
	I1114 14:03:52.699478 1255771 certs.go:437] found cert: /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/home/jenkins/minikube-integration/17581-1186318/.minikube/certs/ca-key.pem (1675 bytes)
	I1114 14:03:52.699504 1255771 certs.go:437] found cert: /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/home/jenkins/minikube-integration/17581-1186318/.minikube/certs/ca.pem (1082 bytes)
	I1114 14:03:52.699532 1255771 certs.go:437] found cert: /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/home/jenkins/minikube-integration/17581-1186318/.minikube/certs/cert.pem (1123 bytes)
	I1114 14:03:52.699561 1255771 certs.go:437] found cert: /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/home/jenkins/minikube-integration/17581-1186318/.minikube/certs/key.pem (1675 bytes)
	I1114 14:03:52.699622 1255771 certs.go:437] found cert: /home/jenkins/minikube-integration/17581-1186318/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17581-1186318/.minikube/files/etc/ssl/certs/11916902.pem (1708 bytes)
	I1114 14:03:52.699658 1255771 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1186318/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1114 14:03:52.699675 1255771 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/1191690.pem -> /usr/share/ca-certificates/1191690.pem
	I1114 14:03:52.699689 1255771 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1186318/.minikube/files/etc/ssl/certs/11916902.pem -> /usr/share/ca-certificates/11916902.pem
	I1114 14:03:52.700315 1255771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/multinode-683928/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1114 14:03:52.730042 1255771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/multinode-683928/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1114 14:03:52.758993 1255771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/multinode-683928/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1114 14:03:52.787882 1255771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/multinode-683928/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1114 14:03:52.817380 1255771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1114 14:03:52.849106 1255771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1114 14:03:52.879339 1255771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1114 14:03:52.909597 1255771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1114 14:03:52.939232 1255771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1114 14:03:52.968013 1255771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/1191690.pem --> /usr/share/ca-certificates/1191690.pem (1338 bytes)
	I1114 14:03:52.996350 1255771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/files/etc/ssl/certs/11916902.pem --> /usr/share/ca-certificates/11916902.pem (1708 bytes)
	I1114 14:03:53.025343 1255771 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
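The scp calls above push every profile certificate into the paths kubeadm expects on the node. A minimal manual equivalent, assuming the node's forwarded SSH port (34354, as seen later in this log) and the docker user; paths and port are illustrative:

	# Push a CA cert over the forwarded SSH port, then move it into place
	# with the permissions minikube uses for /var/lib/minikube/certs.
	scp -P 34354 ~/.minikube/ca.crt docker@127.0.0.1:/tmp/ca.crt
	ssh -p 34354 docker@127.0.0.1 'sudo install -m 0644 /tmp/ca.crt /var/lib/minikube/certs/ca.crt'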
	I1114 14:03:53.046910 1255771 ssh_runner.go:195] Run: openssl version
	I1114 14:03:53.053730 1255771 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I1114 14:03:53.054120 1255771 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1114 14:03:53.065952 1255771 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1114 14:03:53.070617 1255771 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Nov 14 13:34 /usr/share/ca-certificates/minikubeCA.pem
	I1114 14:03:53.070644 1255771 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 14 13:34 /usr/share/ca-certificates/minikubeCA.pem
	I1114 14:03:53.070717 1255771 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1114 14:03:53.078831 1255771 command_runner.go:130] > b5213941
	I1114 14:03:53.079280 1255771 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1114 14:03:53.090777 1255771 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1191690.pem && ln -fs /usr/share/ca-certificates/1191690.pem /etc/ssl/certs/1191690.pem"
	I1114 14:03:53.102435 1255771 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1191690.pem
	I1114 14:03:53.106995 1255771 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Nov 14 13:42 /usr/share/ca-certificates/1191690.pem
	I1114 14:03:53.107077 1255771 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 14 13:42 /usr/share/ca-certificates/1191690.pem
	I1114 14:03:53.107154 1255771 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1191690.pem
	I1114 14:03:53.115416 1255771 command_runner.go:130] > 51391683
	I1114 14:03:53.115840 1255771 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1191690.pem /etc/ssl/certs/51391683.0"
	I1114 14:03:53.127683 1255771 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11916902.pem && ln -fs /usr/share/ca-certificates/11916902.pem /etc/ssl/certs/11916902.pem"
	I1114 14:03:53.140429 1255771 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11916902.pem
	I1114 14:03:53.144924 1255771 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Nov 14 13:42 /usr/share/ca-certificates/11916902.pem
	I1114 14:03:53.145192 1255771 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 14 13:42 /usr/share/ca-certificates/11916902.pem
	I1114 14:03:53.145264 1255771 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11916902.pem
	I1114 14:03:53.153234 1255771 command_runner.go:130] > 3ec20f2e
	I1114 14:03:53.153707 1255771 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11916902.pem /etc/ssl/certs/3ec20f2e.0"
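The openssl/ln sequence above implements OpenSSL's subject-hash lookup convention: each CA under /etc/ssl/certs must also be reachable as <subject-hash>.0. A sketch of the same step for the minikube CA, using the exact commands from the log:

	# OpenSSL locates CAs by subject hash; create the <hash>.0 symlink
	# only if it does not already exist.
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo /bin/bash -c "test -L /etc/ssl/certs/${hash}.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/${hash}.0"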
	I1114 14:03:53.165704 1255771 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1114 14:03:53.170098 1255771 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1114 14:03:53.170136 1255771 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1114 14:03:53.170200 1255771 kubeadm.go:404] StartCluster: {Name:multinode-683928 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-683928 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 14:03:53.170293 1255771 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1114 14:03:53.170371 1255771 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1114 14:03:53.212174 1255771 cri.go:89] found id: ""
	I1114 14:03:53.212248 1255771 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1114 14:03:53.222961 1255771 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I1114 14:03:53.222990 1255771 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I1114 14:03:53.222999 1255771 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I1114 14:03:53.223090 1255771 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1114 14:03:53.234177 1255771 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1114 14:03:53.234266 1255771 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1114 14:03:53.245082 1255771 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1114 14:03:53.245153 1255771 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1114 14:03:53.245177 1255771 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1114 14:03:53.245194 1255771 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1114 14:03:53.245257 1255771 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1114 14:03:53.245304 1255771 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1114 14:03:53.344530 1255771 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1049-aws\n", err: exit status 1
	I1114 14:03:53.344631 1255771 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1049-aws\n", err: exit status 1
	I1114 14:03:53.427111 1255771 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1114 14:03:53.427145 1255771 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
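Both [WARNING] lines are expected under the docker driver: the AWS host kernel ships no "configs" module for SystemVerification to inspect, and minikube starts kubelet itself instead of enabling the systemd unit. They stay non-fatal because the init command passes them in --ignore-preflight-errors. A trimmed sketch of that invocation (the full error list is printed above):

	sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" \
	  kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
	  --ignore-preflight-errors=Swap,NumCPU,Mem,SystemVerification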
	I1114 14:04:09.891690 1255771 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1114 14:04:09.891716 1255771 command_runner.go:130] > [init] Using Kubernetes version: v1.28.3
	I1114 14:04:09.891755 1255771 kubeadm.go:322] [preflight] Running pre-flight checks
	I1114 14:04:09.891767 1255771 command_runner.go:130] > [preflight] Running pre-flight checks
	I1114 14:04:09.891849 1255771 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1114 14:04:09.891858 1255771 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I1114 14:04:09.891909 1255771 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1049-aws
	I1114 14:04:09.891918 1255771 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1049-aws
	I1114 14:04:09.891950 1255771 kubeadm.go:322] OS: Linux
	I1114 14:04:09.891959 1255771 command_runner.go:130] > OS: Linux
	I1114 14:04:09.892001 1255771 kubeadm.go:322] CGROUPS_CPU: enabled
	I1114 14:04:09.892009 1255771 command_runner.go:130] > CGROUPS_CPU: enabled
	I1114 14:04:09.892054 1255771 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I1114 14:04:09.892062 1255771 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I1114 14:04:09.892106 1255771 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1114 14:04:09.892114 1255771 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I1114 14:04:09.892159 1255771 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1114 14:04:09.892166 1255771 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I1114 14:04:09.892210 1255771 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1114 14:04:09.892219 1255771 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I1114 14:04:09.892263 1255771 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1114 14:04:09.892271 1255771 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I1114 14:04:09.892313 1255771 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1114 14:04:09.892322 1255771 command_runner.go:130] > CGROUPS_PIDS: enabled
	I1114 14:04:09.892367 1255771 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1114 14:04:09.892375 1255771 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I1114 14:04:09.892418 1255771 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I1114 14:04:09.892426 1255771 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I1114 14:04:09.892493 1255771 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1114 14:04:09.892501 1255771 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I1114 14:04:09.892606 1255771 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1114 14:04:09.892616 1255771 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1114 14:04:09.892702 1255771 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1114 14:04:09.892714 1255771 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1114 14:04:09.892771 1255771 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1114 14:04:09.892903 1255771 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1114 14:04:09.895145 1255771 out.go:204]   - Generating certificates and keys ...
	I1114 14:04:09.895261 1255771 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1114 14:04:09.895277 1255771 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1114 14:04:09.895345 1255771 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1114 14:04:09.895352 1255771 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1114 14:04:09.895435 1255771 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I1114 14:04:09.895454 1255771 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1114 14:04:09.895521 1255771 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I1114 14:04:09.895534 1255771 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1114 14:04:09.895593 1255771 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I1114 14:04:09.895611 1255771 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1114 14:04:09.895680 1255771 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I1114 14:04:09.895691 1255771 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1114 14:04:09.895762 1255771 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I1114 14:04:09.895776 1255771 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1114 14:04:09.895906 1255771 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-683928] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1114 14:04:09.895917 1255771 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-683928] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1114 14:04:09.895968 1255771 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I1114 14:04:09.895981 1255771 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1114 14:04:09.896117 1255771 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-683928] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1114 14:04:09.896130 1255771 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-683928] and IPs [192.168.58.2 127.0.0.1 ::1]
	I1114 14:04:09.896207 1255771 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I1114 14:04:09.896216 1255771 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1114 14:04:09.896285 1255771 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I1114 14:04:09.896291 1255771 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1114 14:04:09.896333 1255771 command_runner.go:130] > [certs] Generating "sa" key and public key
	I1114 14:04:09.896337 1255771 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1114 14:04:09.896414 1255771 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1114 14:04:09.896426 1255771 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1114 14:04:09.896476 1255771 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1114 14:04:09.896485 1255771 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1114 14:04:09.896567 1255771 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1114 14:04:09.896577 1255771 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1114 14:04:09.896639 1255771 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1114 14:04:09.896642 1255771 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1114 14:04:09.896711 1255771 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1114 14:04:09.896717 1255771 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1114 14:04:09.896793 1255771 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1114 14:04:09.896800 1255771 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1114 14:04:09.896863 1255771 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1114 14:04:09.896868 1255771 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1114 14:04:09.899923 1255771 out.go:204]   - Booting up control plane ...
	I1114 14:04:09.900036 1255771 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1114 14:04:09.900044 1255771 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1114 14:04:09.900179 1255771 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1114 14:04:09.900205 1255771 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1114 14:04:09.900292 1255771 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1114 14:04:09.900303 1255771 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1114 14:04:09.900414 1255771 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1114 14:04:09.900424 1255771 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1114 14:04:09.900505 1255771 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1114 14:04:09.900509 1255771 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1114 14:04:09.900590 1255771 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1114 14:04:09.900597 1255771 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1114 14:04:09.900752 1255771 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1114 14:04:09.900768 1255771 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1114 14:04:09.900843 1255771 command_runner.go:130] > [apiclient] All control plane components are healthy after 8.502256 seconds
	I1114 14:04:09.900851 1255771 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.502256 seconds
	I1114 14:04:09.900955 1255771 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1114 14:04:09.900967 1255771 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1114 14:04:09.901090 1255771 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1114 14:04:09.901099 1255771 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1114 14:04:09.901181 1255771 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I1114 14:04:09.901220 1255771 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1114 14:04:09.901458 1255771 command_runner.go:130] > [mark-control-plane] Marking the node multinode-683928 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1114 14:04:09.901479 1255771 kubeadm.go:322] [mark-control-plane] Marking the node multinode-683928 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1114 14:04:09.901558 1255771 command_runner.go:130] > [bootstrap-token] Using token: q42rbi.6sdx9tzqgcgjjip2
	I1114 14:04:09.901566 1255771 kubeadm.go:322] [bootstrap-token] Using token: q42rbi.6sdx9tzqgcgjjip2
	I1114 14:04:09.903637 1255771 out.go:204]   - Configuring RBAC rules ...
	I1114 14:04:09.903860 1255771 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1114 14:04:09.903888 1255771 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1114 14:04:09.904016 1255771 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1114 14:04:09.904039 1255771 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1114 14:04:09.904234 1255771 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1114 14:04:09.904255 1255771 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1114 14:04:09.904442 1255771 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1114 14:04:09.904475 1255771 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1114 14:04:09.904674 1255771 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1114 14:04:09.904703 1255771 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1114 14:04:09.904881 1255771 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1114 14:04:09.904909 1255771 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1114 14:04:09.905053 1255771 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1114 14:04:09.905067 1255771 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1114 14:04:09.905154 1255771 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1114 14:04:09.905175 1255771 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1114 14:04:09.905236 1255771 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1114 14:04:09.905247 1255771 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1114 14:04:09.905251 1255771 kubeadm.go:322] 
	I1114 14:04:09.905314 1255771 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1114 14:04:09.905324 1255771 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I1114 14:04:09.905330 1255771 kubeadm.go:322] 
	I1114 14:04:09.905408 1255771 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1114 14:04:09.905416 1255771 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I1114 14:04:09.905420 1255771 kubeadm.go:322] 
	I1114 14:04:09.905458 1255771 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1114 14:04:09.905483 1255771 command_runner.go:130] >   mkdir -p $HOME/.kube
	I1114 14:04:09.905570 1255771 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1114 14:04:09.905596 1255771 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1114 14:04:09.905672 1255771 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1114 14:04:09.905700 1255771 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1114 14:04:09.905727 1255771 kubeadm.go:322] 
	I1114 14:04:09.905812 1255771 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I1114 14:04:09.905826 1255771 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1114 14:04:09.905871 1255771 kubeadm.go:322] 
	I1114 14:04:09.905959 1255771 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1114 14:04:09.905983 1255771 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1114 14:04:09.906007 1255771 kubeadm.go:322] 
	I1114 14:04:09.906061 1255771 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I1114 14:04:09.906085 1255771 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1114 14:04:09.906198 1255771 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1114 14:04:09.906231 1255771 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1114 14:04:09.906333 1255771 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1114 14:04:09.906355 1255771 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1114 14:04:09.906388 1255771 kubeadm.go:322] 
	I1114 14:04:09.906508 1255771 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I1114 14:04:09.906532 1255771 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1114 14:04:09.906652 1255771 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I1114 14:04:09.906674 1255771 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1114 14:04:09.906707 1255771 kubeadm.go:322] 
	I1114 14:04:09.906826 1255771 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token q42rbi.6sdx9tzqgcgjjip2 \
	I1114 14:04:09.906848 1255771 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token q42rbi.6sdx9tzqgcgjjip2 \
	I1114 14:04:09.906989 1255771 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:1a1b25420be6487c50639ce0b981e16ee30b54e658d487c3adf6952ff2c4a2c6 \
	I1114 14:04:09.907011 1255771 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:1a1b25420be6487c50639ce0b981e16ee30b54e658d487c3adf6952ff2c4a2c6 \
	I1114 14:04:09.907060 1255771 command_runner.go:130] > 	--control-plane 
	I1114 14:04:09.907093 1255771 kubeadm.go:322] 	--control-plane 
	I1114 14:04:09.907128 1255771 kubeadm.go:322] 
	I1114 14:04:09.907243 1255771 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I1114 14:04:09.907266 1255771 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1114 14:04:09.907298 1255771 kubeadm.go:322] 
	I1114 14:04:09.907413 1255771 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token q42rbi.6sdx9tzqgcgjjip2 \
	I1114 14:04:09.907435 1255771 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token q42rbi.6sdx9tzqgcgjjip2 \
	I1114 14:04:09.907575 1255771 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:1a1b25420be6487c50639ce0b981e16ee30b54e658d487c3adf6952ff2c4a2c6 
	I1114 14:04:09.907597 1255771 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:1a1b25420be6487c50639ce0b981e16ee30b54e658d487c3adf6952ff2c4a2c6 
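The printed join commands pin the cluster CA through --discovery-token-ca-cert-hash. That value can be recomputed from the CA certificate to confirm it matches; a sketch assuming this cluster's certificate dir (/var/lib/minikube/certs) and an RSA CA key, the kubeadm default:

	# sha256 over the DER-encoded public key of the cluster CA.
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'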
	I1114 14:04:09.907655 1255771 cni.go:84] Creating CNI manager for ""
	I1114 14:04:09.907687 1255771 cni.go:136] 1 nodes found, recommending kindnet
	I1114 14:04:09.910208 1255771 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1114 14:04:09.912111 1255771 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1114 14:04:09.924424 1255771 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1114 14:04:09.924454 1255771 command_runner.go:130] >   Size: 3841245   	Blocks: 7504       IO Block: 4096   regular file
	I1114 14:04:09.924463 1255771 command_runner.go:130] > Device: 3ah/58d	Inode: 1575160     Links: 1
	I1114 14:04:09.924470 1255771 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1114 14:04:09.924477 1255771 command_runner.go:130] > Access: 2023-05-09 19:54:42.000000000 +0000
	I1114 14:04:09.924483 1255771 command_runner.go:130] > Modify: 2023-05-09 19:54:42.000000000 +0000
	I1114 14:04:09.924489 1255771 command_runner.go:130] > Change: 2023-11-14 13:34:27.093734456 +0000
	I1114 14:04:09.924496 1255771 command_runner.go:130] >  Birth: 2023-11-14 13:34:27.053734656 +0000
	I1114 14:04:09.925957 1255771 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1114 14:04:09.925974 1255771 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1114 14:04:10.001276 1255771 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1114 14:04:10.846270 1255771 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I1114 14:04:10.853571 1255771 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I1114 14:04:10.863278 1255771 command_runner.go:130] > serviceaccount/kindnet created
	I1114 14:04:10.879340 1255771 command_runner.go:130] > daemonset.apps/kindnet created
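With a single node found, minikube recommends kindnet and applies its manifest with the cluster's own kubectl; the CNI lands as a daemonset. A quick way to confirm it rolled out, using the same binary and kubeconfig as the log:

	sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system rollout status daemonset/kindnet --timeout=60s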
	I1114 14:04:10.885130 1255771 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1114 14:04:10.885251 1255771 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:04:10.885346 1255771 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=6d8573efb5a7770e21024de23a29d810b200278b minikube.k8s.io/name=multinode-683928 minikube.k8s.io/updated_at=2023_11_14T14_04_10_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:04:11.094237 1255771 command_runner.go:130] > node/multinode-683928 labeled
	I1114 14:04:11.097539 1255771 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I1114 14:04:11.097625 1255771 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:04:11.097672 1255771 command_runner.go:130] > -16
	I1114 14:04:11.097681 1255771 ops.go:34] apiserver oom_adj: -16
	I1114 14:04:11.212922 1255771 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1114 14:04:11.213015 1255771 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:04:11.307753 1255771 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1114 14:04:11.808596 1255771 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:04:11.906019 1255771 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1114 14:04:12.308050 1255771 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:04:12.398683 1255771 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1114 14:04:12.808796 1255771 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:04:12.903404 1255771 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1114 14:04:13.308064 1255771 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:04:13.395760 1255771 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1114 14:04:13.809004 1255771 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:04:13.912110 1255771 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1114 14:04:14.308758 1255771 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:04:14.399425 1255771 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1114 14:04:14.808000 1255771 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:04:14.898746 1255771 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1114 14:04:15.308068 1255771 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:04:15.400907 1255771 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1114 14:04:15.808030 1255771 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:04:15.898231 1255771 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1114 14:04:16.308099 1255771 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:04:16.408840 1255771 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1114 14:04:16.808141 1255771 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:04:16.898089 1255771 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1114 14:04:17.308664 1255771 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:04:17.400484 1255771 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1114 14:04:17.808659 1255771 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:04:17.902077 1255771 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1114 14:04:18.308815 1255771 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:04:18.409931 1255771 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1114 14:04:18.808576 1255771 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:04:18.904944 1255771 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1114 14:04:19.308637 1255771 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:04:19.400147 1255771 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1114 14:04:19.808000 1255771 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:04:19.897683 1255771 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1114 14:04:20.307974 1255771 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:04:20.402849 1255771 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1114 14:04:20.808062 1255771 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:04:20.914376 1255771 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1114 14:04:21.307974 1255771 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:04:21.398878 1255771 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1114 14:04:21.808386 1255771 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:04:21.916061 1255771 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1114 14:04:22.308640 1255771 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:04:22.422887 1255771 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1114 14:04:22.808045 1255771 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1114 14:04:23.036216 1255771 command_runner.go:130] > NAME      SECRETS   AGE
	I1114 14:04:23.036236 1255771 command_runner.go:130] > default   0         1s
	I1114 14:04:23.037864 1255771 kubeadm.go:1081] duration metric: took 12.15268871s to wait for elevateKubeSystemPrivileges.
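The run of NotFound errors above is the expected steady state of a poll: minikube retries "kubectl get sa default" roughly every half second until the controller-manager creates the serviceaccount, which here took about 12 seconds. A sketch of the same wait loop:

	# Block until the "default" serviceaccount exists.
	until sudo /var/lib/minikube/binaries/v1.28.3/kubectl \
	      --kubeconfig=/var/lib/minikube/kubeconfig get sa default >/dev/null 2>&1; do
	  sleep 0.5
	done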
	I1114 14:04:23.037889 1255771 kubeadm.go:406] StartCluster complete in 29.867694215s
	I1114 14:04:23.037906 1255771 settings.go:142] acquiring lock: {Name:mk8b1f62ebfea123b4e39d0037f993206e354b59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 14:04:23.037973 1255771 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17581-1186318/kubeconfig
	I1114 14:04:23.038754 1255771 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17581-1186318/kubeconfig: {Name:mkf1191f735848932fc7f3417e1088220acbc478 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 14:04:23.039278 1255771 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17581-1186318/kubeconfig
	I1114 14:04:23.039530 1255771 kapi.go:59] client config for multinode-683928: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/multinode-683928/client.crt", KeyFile:"/home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/multinode-683928/client.key", CAFile:"/home/jenkins/minikube-integration/17581-1186318/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c4650), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1114 14:04:23.041058 1255771 config.go:182] Loaded profile config "multinode-683928": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 14:04:23.041114 1255771 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1114 14:04:23.041327 1255771 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1114 14:04:23.041406 1255771 addons.go:69] Setting storage-provisioner=true in profile "multinode-683928"
	I1114 14:04:23.041424 1255771 addons.go:231] Setting addon storage-provisioner=true in "multinode-683928"
	I1114 14:04:23.041462 1255771 host.go:66] Checking if "multinode-683928" exists ...
	I1114 14:04:23.041930 1255771 cli_runner.go:164] Run: docker container inspect multinode-683928 --format={{.State.Status}}
	I1114 14:04:23.042308 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1114 14:04:23.042340 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:23.042365 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:23.042400 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:23.042655 1255771 cert_rotation.go:137] Starting client certificate rotation controller
	I1114 14:04:23.043126 1255771 addons.go:69] Setting default-storageclass=true in profile "multinode-683928"
	I1114 14:04:23.043176 1255771 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-683928"
	I1114 14:04:23.043546 1255771 cli_runner.go:164] Run: docker container inspect multinode-683928 --format={{.State.Status}}
	I1114 14:04:23.069220 1255771 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17581-1186318/kubeconfig
	I1114 14:04:23.069480 1255771 kapi.go:59] client config for multinode-683928: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/multinode-683928/client.crt", KeyFile:"/home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/multinode-683928/client.key", CAFile:"/home/jenkins/minikube-integration/17581-1186318/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c4650), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1114 14:04:23.069722 1255771 addons.go:231] Setting addon default-storageclass=true in "multinode-683928"
	I1114 14:04:23.069749 1255771 host.go:66] Checking if "multinode-683928" exists ...
	I1114 14:04:23.070203 1255771 cli_runner.go:164] Run: docker container inspect multinode-683928 --format={{.State.Status}}
	I1114 14:04:23.086747 1255771 round_trippers.go:574] Response Status: 200 OK in 44 milliseconds
	I1114 14:04:23.086770 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:23.086779 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:23 GMT
	I1114 14:04:23.086785 1255771 round_trippers.go:580]     Audit-Id: e4391dc2-a58f-401a-b803-a5c8f0ecf6eb
	I1114 14:04:23.086791 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:23.086798 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:23.086804 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:23.086810 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:23.086817 1255771 round_trippers.go:580]     Content-Length: 291
	I1114 14:04:23.088029 1255771 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"ebff7a46-980b-417a-ba6d-f7dd75dbc9ce","resourceVersion":"353","creationTimestamp":"2023-11-14T14:04:09Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1114 14:04:23.088495 1255771 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"ebff7a46-980b-417a-ba6d-f7dd75dbc9ce","resourceVersion":"353","creationTimestamp":"2023-11-14T14:04:09Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1114 14:04:23.088610 1255771 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1114 14:04:23.088620 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:23.088629 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:23.088636 1255771 round_trippers.go:473]     Content-Type: application/json
	I1114 14:04:23.088642 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:23.119208 1255771 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1114 14:04:23.122378 1255771 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1114 14:04:23.122403 1255771 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1114 14:04:23.122472 1255771 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-683928
	I1114 14:04:23.131324 1255771 round_trippers.go:574] Response Status: 200 OK in 42 milliseconds
	I1114 14:04:23.131348 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:23.131357 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:23 GMT
	I1114 14:04:23.131363 1255771 round_trippers.go:580]     Audit-Id: 8ad62f98-5b9e-4180-8564-293303aea135
	I1114 14:04:23.131369 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:23.131376 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:23.131382 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:23.131389 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:23.131395 1255771 round_trippers.go:580]     Content-Length: 291
	I1114 14:04:23.131490 1255771 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"ebff7a46-980b-417a-ba6d-f7dd75dbc9ce","resourceVersion":"356","creationTimestamp":"2023-11-14T14:04:09Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1114 14:04:23.131646 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1114 14:04:23.131655 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:23.131663 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:23.131669 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:23.150143 1255771 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1114 14:04:23.150169 1255771 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1114 14:04:23.150235 1255771 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-683928
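The addon manifests are written from memory to /etc/kubernetes/addons over SSH and applied afterwards with the cluster's kubectl (the apply itself falls outside this excerpt). A sketch of that step, assuming the paths from the log:

	sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  apply -f /etc/kubernetes/addons/storage-provisioner.yaml \
	  -f /etc/kubernetes/addons/storageclass.yaml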
	I1114 14:04:23.164749 1255771 round_trippers.go:574] Response Status: 200 OK in 33 milliseconds
	I1114 14:04:23.164776 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:23.164786 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:23.164792 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:23.164798 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:23.164805 1255771 round_trippers.go:580]     Content-Length: 291
	I1114 14:04:23.164811 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:23 GMT
	I1114 14:04:23.164822 1255771 round_trippers.go:580]     Audit-Id: 6edfef79-4a77-44b0-bbcf-aa6a2eb7c08f
	I1114 14:04:23.164828 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:23.164851 1255771 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"ebff7a46-980b-417a-ba6d-f7dd75dbc9ce","resourceVersion":"356","creationTimestamp":"2023-11-14T14:04:09Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I1114 14:04:23.164957 1255771 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-683928" context rescaled to 1 replicas
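The GET/PUT pair above is a read-modify-write against the deployment's Scale subresource, taking coredns from 2 replicas down to 1 for the single-node case. The same change with plain kubectl:

	kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system scale deployment coredns --replicas=1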
	I1114 14:04:23.164993 1255771 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1114 14:04:23.171713 1255771 out.go:177] * Verifying Kubernetes components...
	I1114 14:04:23.166804 1255771 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34354 SSHKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/multinode-683928/id_rsa Username:docker}
	I1114 14:04:23.174907 1255771 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 14:04:23.206038 1255771 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34354 SSHKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/multinode-683928/id_rsa Username:docker}
	I1114 14:04:23.360324 1255771 command_runner.go:130] > apiVersion: v1
	I1114 14:04:23.360346 1255771 command_runner.go:130] > data:
	I1114 14:04:23.360360 1255771 command_runner.go:130] >   Corefile: |
	I1114 14:04:23.360365 1255771 command_runner.go:130] >     .:53 {
	I1114 14:04:23.360370 1255771 command_runner.go:130] >         errors
	I1114 14:04:23.360376 1255771 command_runner.go:130] >         health {
	I1114 14:04:23.360382 1255771 command_runner.go:130] >            lameduck 5s
	I1114 14:04:23.360390 1255771 command_runner.go:130] >         }
	I1114 14:04:23.360395 1255771 command_runner.go:130] >         ready
	I1114 14:04:23.360409 1255771 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1114 14:04:23.360415 1255771 command_runner.go:130] >            pods insecure
	I1114 14:04:23.360424 1255771 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1114 14:04:23.360430 1255771 command_runner.go:130] >            ttl 30
	I1114 14:04:23.360445 1255771 command_runner.go:130] >         }
	I1114 14:04:23.360454 1255771 command_runner.go:130] >         prometheus :9153
	I1114 14:04:23.360461 1255771 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1114 14:04:23.360467 1255771 command_runner.go:130] >            max_concurrent 1000
	I1114 14:04:23.360473 1255771 command_runner.go:130] >         }
	I1114 14:04:23.360478 1255771 command_runner.go:130] >         cache 30
	I1114 14:04:23.360485 1255771 command_runner.go:130] >         loop
	I1114 14:04:23.360491 1255771 command_runner.go:130] >         reload
	I1114 14:04:23.360500 1255771 command_runner.go:130] >         loadbalance
	I1114 14:04:23.360504 1255771 command_runner.go:130] >     }
	I1114 14:04:23.360513 1255771 command_runner.go:130] > kind: ConfigMap
	I1114 14:04:23.360518 1255771 command_runner.go:130] > metadata:
	I1114 14:04:23.360528 1255771 command_runner.go:130] >   creationTimestamp: "2023-11-14T14:04:09Z"
	I1114 14:04:23.360538 1255771 command_runner.go:130] >   name: coredns
	I1114 14:04:23.360588 1255771 command_runner.go:130] >   namespace: kube-system
	I1114 14:04:23.360598 1255771 command_runner.go:130] >   resourceVersion: "227"
	I1114 14:04:23.360605 1255771 command_runner.go:130] >   uid: 25ac4a9b-2283-4033-93a7-bc471ee2b217
	I1114 14:04:23.364045 1255771 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
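The bash pipeline above rewrites the Corefile before replacing the ConfigMap: it enables query logging ("log" inserted before "errors") and adds a hosts block ahead of the forward plugin so host.minikube.internal resolves to the host gateway. The inserted fragment, as given in the sed expression:

	        hosts {
	           192.168.58.1 host.minikube.internal
	           fallthrough
	        }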
	I1114 14:04:23.364473 1255771 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17581-1186318/kubeconfig
	I1114 14:04:23.364764 1255771 kapi.go:59] client config for multinode-683928: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/multinode-683928/client.crt", KeyFile:"/home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/multinode-683928/client.key", CAFile:"/home/jenkins/minikube-integration/17581-1186318/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c4650), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1114 14:04:23.365061 1255771 node_ready.go:35] waiting up to 6m0s for node "multinode-683928" to be "Ready" ...
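[editor's note] Everything that follows is this wait loop: roughly every 500ms the client GETs the Node object and inspects its Ready condition, logging "Ready":"False" every few polls until the status flips. A minimal client-go sketch of the same check follows; the function name, the 500ms interval, and the kubeconfig handling are assumptions for illustration, not minikube's actual node_ready.go code.

    // Hypothetical sketch: poll a node's Ready condition with client-go,
    // mirroring the GET loop visible in the log above.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // pollNodeReady returns nil once the named node reports Ready=True,
    // or an error if the timeout elapses first.
    func pollNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    		if err == nil {
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					return nil
    				}
    			}
    		}
    		// The log shows roughly 500ms between successive GETs.
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("node %q not Ready within %s", name, timeout)
    }

    func main() {
    	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}
    	if err := pollNodeReady(context.Background(), cs, "multinode-683928", 6*time.Minute); err != nil {
    		panic(err)
    	}
    	fmt.Println("node is Ready")
    }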
	I1114 14:04:23.365160 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:04:23.365171 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:23.365181 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:23.365188 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:23.382433 1255771 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I1114 14:04:23.382461 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:23.382470 1255771 round_trippers.go:580]     Audit-Id: a90ebb8f-ca80-4dd9-95e9-8a1f3e5235f9
	I1114 14:04:23.382477 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:23.382484 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:23.382490 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:23.382497 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:23.382503 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:23 GMT
	I1114 14:04:23.382769 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","resourceVersion":"336","creationTimestamp":"2023-11-14T14:04:07Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-683928","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T14_04_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T14:04:06Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1114 14:04:23.383486 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:04:23.383508 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:23.383517 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:23.383525 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:23.419331 1255771 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1114 14:04:23.421851 1255771 round_trippers.go:574] Response Status: 200 OK in 38 milliseconds
	I1114 14:04:23.421920 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:23.421942 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:23.421962 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:23.421997 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:23 GMT
	I1114 14:04:23.422024 1255771 round_trippers.go:580]     Audit-Id: 8a860724-cd42-498f-adb0-80c2eb607df2
	I1114 14:04:23.422045 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:23.422079 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:23.425348 1255771 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1114 14:04:23.427834 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","resourceVersion":"336","creationTimestamp":"2023-11-14T14:04:07Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-683928","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T14_04_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T14:04:06Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1114 14:04:23.929112 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:04:23.929190 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:23.929223 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:23.929248 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:23.936620 1255771 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1114 14:04:23.936720 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:23.936743 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:23.936781 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:23.936801 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:23.936824 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:23 GMT
	I1114 14:04:23.936855 1255771 round_trippers.go:580]     Audit-Id: 72cb6941-6251-4604-a738-6fc7c6932710
	I1114 14:04:23.936879 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:23.937026 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","resourceVersion":"336","creationTimestamp":"2023-11-14T14:04:07Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-683928","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T14_04_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T14:04:06Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1114 14:04:23.985392 1255771 command_runner.go:130] > configmap/coredns replaced
	I1114 14:04:23.987426 1255771 start.go:926] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS's ConfigMap
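[editor's note] With the ConfigMap replaced, the reload plugin already present in the Corefile lets CoreDNS pick up the change without a pod restart, once the kubelet has synced the updated ConfigMap volume. One way to confirm the injected record by hand, assuming the same kubeconfig:

    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'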
	I1114 14:04:24.133356 1255771 command_runner.go:130] > serviceaccount/storage-provisioner created
	I1114 14:04:24.141768 1255771 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I1114 14:04:24.151482 1255771 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1114 14:04:24.162612 1255771 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1114 14:04:24.175109 1255771 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I1114 14:04:24.190584 1255771 command_runner.go:130] > pod/storage-provisioner created
	I1114 14:04:24.196523 1255771 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I1114 14:04:24.196686 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses
	I1114 14:04:24.196697 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:24.196706 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:24.196713 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:24.205890 1255771 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1114 14:04:24.205919 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:24.205929 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:24.205936 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:24.205942 1255771 round_trippers.go:580]     Content-Length: 1273
	I1114 14:04:24.205949 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:24 GMT
	I1114 14:04:24.205955 1255771 round_trippers.go:580]     Audit-Id: 977c9b6d-26b8-42a6-bef1-40f938ca6c37
	I1114 14:04:24.205966 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:24.205972 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:24.206035 1255771 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"384"},"items":[{"metadata":{"name":"standard","uid":"d4f10dd6-a295-40cc-8d30-29f164c057a5","resourceVersion":"372","creationTimestamp":"2023-11-14T14:04:23Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-11-14T14:04:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I1114 14:04:24.206439 1255771 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"d4f10dd6-a295-40cc-8d30-29f164c057a5","resourceVersion":"372","creationTimestamp":"2023-11-14T14:04:23Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-11-14T14:04:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1114 14:04:24.206494 1255771 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1114 14:04:24.206506 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:24.206515 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:24.206522 1255771 round_trippers.go:473]     Content-Type: application/json
	I1114 14:04:24.206531 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:24.210790 1255771 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1114 14:04:24.210817 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:24.210827 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:24.210833 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:24.210843 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:24.210850 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:24.210860 1255771 round_trippers.go:580]     Content-Length: 1220
	I1114 14:04:24.210866 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:24 GMT
	I1114 14:04:24.210873 1255771 round_trippers.go:580]     Audit-Id: 1c2ed169-34ff-4777-8c8c-9aad2205c53f
	I1114 14:04:24.210912 1255771 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"d4f10dd6-a295-40cc-8d30-29f164c057a5","resourceVersion":"372","creationTimestamp":"2023-11-14T14:04:23Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-11-14T14:04:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1114 14:04:24.214628 1255771 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1114 14:04:24.216450 1255771 addons.go:502] enable addons completed in 1.175112556s: enabled=[storage-provisioner default-storageclass]
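[editor's note] The GET/PUT pair above is the default-storageclass addon at work: it read back the freshly created "standard" StorageClass and re-wrote it with the storageclass.kubernetes.io/is-default-class annotation pinned to "true", which is the annotation Kubernetes uses to mark the cluster default. A roughly equivalent manual step would be:

    kubectl patch storageclass standard -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'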
	I1114 14:04:24.429237 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:04:24.429262 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:24.429272 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:24.429302 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:24.431843 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:04:24.431868 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:24.431886 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:24.431893 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:24.431900 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:24.431906 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:24.431912 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:24 GMT
	I1114 14:04:24.431921 1255771 round_trippers.go:580]     Audit-Id: f7dc482b-c102-4a79-b6af-78e1eefa5d31
	I1114 14:04:24.432209 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","resourceVersion":"336","creationTimestamp":"2023-11-14T14:04:07Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-683928","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T14_04_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T14:04:06Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1114 14:04:24.929524 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:04:24.929547 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:24.929557 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:24.929564 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:24.932204 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:04:24.932275 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:24.932297 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:24 GMT
	I1114 14:04:24.932320 1255771 round_trippers.go:580]     Audit-Id: de34cadd-c35e-42af-ab72-b77207000888
	I1114 14:04:24.932357 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:24.932374 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:24.932381 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:24.932387 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:24.932534 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","resourceVersion":"336","creationTimestamp":"2023-11-14T14:04:07Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-683928","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T14_04_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T14:04:06Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1114 14:04:25.428962 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:04:25.428992 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:25.429002 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:25.429009 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:25.431532 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:04:25.431557 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:25.431566 1255771 round_trippers.go:580]     Audit-Id: b522cd4c-79b6-476a-86c9-9aa5177b232f
	I1114 14:04:25.431572 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:25.431578 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:25.431584 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:25.431591 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:25.431597 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:25 GMT
	I1114 14:04:25.431816 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","resourceVersion":"336","creationTimestamp":"2023-11-14T14:04:07Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-683928","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T14_04_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T14:04:06Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1114 14:04:25.432232 1255771 node_ready.go:58] node "multinode-683928" has status "Ready":"False"
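[editor's note] This is the first of many "Ready":"False" reports. On a freshly bootstrapped node that is expected: the kubelet typically keeps the Ready condition at False until the container runtime and pod network are fully initialized. The same condition can be inspected directly with a manual check such as:

    kubectl get node multinode-683928 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'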
	I1114 14:04:25.928773 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:04:25.928800 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:25.928811 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:25.928818 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:25.931466 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:04:25.931491 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:25.931500 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:25.931507 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:25.931513 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:25 GMT
	I1114 14:04:25.931521 1255771 round_trippers.go:580]     Audit-Id: 95f2b615-d989-4296-bba1-f57342fbe780
	I1114 14:04:25.931527 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:25.931533 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:25.931877 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","resourceVersion":"336","creationTimestamp":"2023-11-14T14:04:07Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-683928","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T14_04_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T14:04:06Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1114 14:04:26.429100 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:04:26.429124 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:26.429135 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:26.429142 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:26.431666 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:04:26.431689 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:26.431699 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:26.431705 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:26.431712 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:26 GMT
	I1114 14:04:26.431719 1255771 round_trippers.go:580]     Audit-Id: 9a203768-cfb3-4228-94d9-0069d80c07b5
	I1114 14:04:26.431725 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:26.431731 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:26.431957 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","resourceVersion":"336","creationTimestamp":"2023-11-14T14:04:07Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-683928","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T14_04_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T14:04:06Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1114 14:04:26.929126 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:04:26.929150 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:26.929160 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:26.929168 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:26.931705 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:04:26.931729 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:26.931782 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:26 GMT
	I1114 14:04:26.931789 1255771 round_trippers.go:580]     Audit-Id: e446a986-b88b-4462-88d2-818fe5e69bad
	I1114 14:04:26.931797 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:26.931806 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:26.931813 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:26.931827 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:26.932119 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","resourceVersion":"336","creationTimestamp":"2023-11-14T14:04:07Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-683928","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T14_04_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T14:04:06Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1114 14:04:27.429242 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:04:27.429269 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:27.429279 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:27.429286 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:27.431820 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:04:27.431845 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:27.431853 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:27.431859 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:27.431866 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:27.431872 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:27.431879 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:27 GMT
	I1114 14:04:27.431885 1255771 round_trippers.go:580]     Audit-Id: 8e3d5c68-dc47-4055-abc7-94d1b4f86555
	I1114 14:04:27.432032 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","resourceVersion":"336","creationTimestamp":"2023-11-14T14:04:07Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-683928","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T14_04_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T14:04:06Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1114 14:04:27.432432 1255771 node_ready.go:58] node "multinode-683928" has status "Ready":"False"
	I1114 14:04:27.929140 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:04:27.929160 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:27.929171 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:27.929179 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:27.931728 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:04:27.931783 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:27.931792 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:27.931799 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:27.931806 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:27 GMT
	I1114 14:04:27.931813 1255771 round_trippers.go:580]     Audit-Id: 929af3ec-6961-4021-a2b9-8fb1a3a66e0d
	I1114 14:04:27.931820 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:27.931826 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:27.931945 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","resourceVersion":"336","creationTimestamp":"2023-11-14T14:04:07Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-683928","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T14_04_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T14:04:06Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1114 14:04:28.429480 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:04:28.429507 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:28.429518 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:28.429526 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:28.432144 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:04:28.432214 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:28.432236 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:28.432258 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:28.432291 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:28.432313 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:28.432335 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:28 GMT
	I1114 14:04:28.432356 1255771 round_trippers.go:580]     Audit-Id: b984c768-358b-44c7-85b4-22c9bfb22cd1
	I1114 14:04:28.432499 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","resourceVersion":"336","creationTimestamp":"2023-11-14T14:04:07Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-683928","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T14_04_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T14:04:06Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1114 14:04:28.928705 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:04:28.928728 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:28.928738 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:28.928746 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:28.931264 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:04:28.931290 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:28.931299 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:28.931305 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:28 GMT
	I1114 14:04:28.931311 1255771 round_trippers.go:580]     Audit-Id: b40671fa-9348-4e36-825c-23eba20123e2
	I1114 14:04:28.931324 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:28.931333 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:28.931339 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:28.931450 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","resourceVersion":"336","creationTimestamp":"2023-11-14T14:04:07Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-683928","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T14_04_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T14:04:06Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1114 14:04:29.428577 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:04:29.428605 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:29.428615 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:29.428628 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:29.431017 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:04:29.431082 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:29.431121 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:29.431141 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:29 GMT
	I1114 14:04:29.431163 1255771 round_trippers.go:580]     Audit-Id: 8f1fe592-d884-413f-b985-5f6cb4c957bf
	I1114 14:04:29.431195 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:29.431218 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:29.431237 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:29.431747 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","resourceVersion":"336","creationTimestamp":"2023-11-14T14:04:07Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-683928","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T14_04_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T14:04:06Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1114 14:04:29.929092 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:04:29.929136 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:29.929146 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:29.929153 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:29.931666 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:04:29.931736 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:29.931759 1255771 round_trippers.go:580]     Audit-Id: 4c26b7a6-9f5a-4bdf-90cf-f0b5e4c2d2ae
	I1114 14:04:29.931780 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:29.931813 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:29.931838 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:29.931861 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:29.931896 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:29 GMT
	I1114 14:04:29.932076 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","resourceVersion":"336","creationTimestamp":"2023-11-14T14:04:07Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-683928","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T14_04_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T14:04:06Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1114 14:04:29.932556 1255771 node_ready.go:58] node "multinode-683928" has status "Ready":"False"
	I1114 14:04:30.429497 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:04:30.429523 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:30.429533 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:30.429541 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:30.432095 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:04:30.432164 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:30.432188 1255771 round_trippers.go:580]     Audit-Id: 0a526c4f-5c26-4b8f-8826-46204ff0a93e
	I1114 14:04:30.432234 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:30.432256 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:30.432278 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:30.432300 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:30.432329 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:30 GMT
	I1114 14:04:30.432526 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","resourceVersion":"336","creationTimestamp":"2023-11-14T14:04:07Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-683928","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T14_04_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T14:04:06Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1114 14:04:30.928533 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:04:30.928566 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:30.928576 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:30.928584 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:30.932622 1255771 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1114 14:04:30.932729 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:30.932753 1255771 round_trippers.go:580]     Audit-Id: 8e3487e1-9964-4c1f-bd96-8dee3b1aec7c
	I1114 14:04:30.932766 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:30.932773 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:30.932780 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:30.932786 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:30.932793 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:30 GMT
	I1114 14:04:30.932901 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","resourceVersion":"336","creationTimestamp":"2023-11-14T14:04:07Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-683928","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T14_04_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T14:04:06Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1114 14:04:31.429150 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:04:31.429175 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:31.429186 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:31.429194 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:31.431838 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:04:31.431914 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:31.431950 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:31.431978 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:31.432001 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:31.432039 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:31.432065 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:31 GMT
	I1114 14:04:31.432088 1255771 round_trippers.go:580]     Audit-Id: ebb9e4ae-0f88-4449-b808-dbceb3ef60d8
	I1114 14:04:31.432292 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","resourceVersion":"336","creationTimestamp":"2023-11-14T14:04:07Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-683928","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T14_04_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T14:04:06Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1114 14:04:31.928696 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:04:31.928722 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:31.928732 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:31.928740 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:31.931222 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:04:31.931243 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:31.931252 1255771 round_trippers.go:580]     Audit-Id: 62435910-1373-4069-b09c-a0cb35041360
	I1114 14:04:31.931258 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:31.931265 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:31.931273 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:31.931279 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:31.931285 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:31 GMT
	I1114 14:04:31.931517 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","resourceVersion":"336","creationTimestamp":"2023-11-14T14:04:07Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-683928","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T14_04_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T14:04:06Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1114 14:04:32.428567 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:04:32.428590 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:32.428600 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:32.428608 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:32.431127 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:04:32.431154 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:32.431164 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:32.431170 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:32.431177 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:32.431184 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:32 GMT
	I1114 14:04:32.431190 1255771 round_trippers.go:580]     Audit-Id: fe25c2ef-2510-4da2-a3cf-25e0c2b0fb36
	I1114 14:04:32.431196 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:32.431326 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","resourceVersion":"336","creationTimestamp":"2023-11-14T14:04:07Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-683928","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T14_04_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T14:04:06Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1114 14:04:32.431779 1255771 node_ready.go:58] node "multinode-683928" has status "Ready":"False"
	I1114 14:04:32.929522 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:04:32.929549 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:32.929559 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:32.929567 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:32.932079 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:04:32.932107 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:32.932115 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:32.932122 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:32.932129 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:32.932135 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:32 GMT
	I1114 14:04:32.932141 1255771 round_trippers.go:580]     Audit-Id: a49f3c4c-34ce-475b-89bc-aad2a6d5b4a3
	I1114 14:04:32.932148 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:32.932410 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","resourceVersion":"336","creationTimestamp":"2023-11-14T14:04:07Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-683928","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T14_04_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:04:06Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1114 14:04:33.429206 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:04:33.429230 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:33.429240 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:33.429247 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:33.431673 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:04:33.431700 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:33.431710 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:33.431717 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:33.431724 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:33.431730 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:33 GMT
	I1114 14:04:33.431737 1255771 round_trippers.go:580]     Audit-Id: 2f7532d2-e774-4126-b8c3-1d3a25b0a1a9
	I1114 14:04:33.431746 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:33.432085 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","resourceVersion":"336","creationTimestamp":"2023-11-14T14:04:07Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-683928","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T14_04_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:04:06Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1114 14:04:33.929200 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:04:33.929232 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:33.929242 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:33.929249 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:33.931740 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:04:33.931760 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:33.931768 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:33.931775 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:33.931782 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:33.931788 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:33 GMT
	I1114 14:04:33.931794 1255771 round_trippers.go:580]     Audit-Id: ea573b1a-026e-4637-a997-1274259aef83
	I1114 14:04:33.931800 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:33.932006 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","resourceVersion":"336","creationTimestamp":"2023-11-14T14:04:07Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-683928","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T14_04_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:04:06Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1114 14:04:34.429188 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:04:34.429210 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:34.429220 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:34.429227 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:34.431794 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:04:34.431817 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:34.431825 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:34.431832 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:34.431838 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:34.431844 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:34.431851 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:34 GMT
	I1114 14:04:34.431857 1255771 round_trippers.go:580]     Audit-Id: ba660be5-93a7-4374-821e-6cd641b29b48
	I1114 14:04:34.431989 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","resourceVersion":"336","creationTimestamp":"2023-11-14T14:04:07Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-683928","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T14_04_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:04:06Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1114 14:04:34.432382 1255771 node_ready.go:58] node "multinode-683928" has status "Ready":"False"
	I1114 14:04:34.929125 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:04:34.929148 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:34.929159 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:34.929166 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:34.931756 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:04:34.931779 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:34.931788 1255771 round_trippers.go:580]     Audit-Id: c8ac09cf-d137-4820-ae7b-424462531fa3
	I1114 14:04:34.931794 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:34.931800 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:34.931807 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:34.931814 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:34.931825 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:34 GMT
	I1114 14:04:34.931925 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","resourceVersion":"336","creationTimestamp":"2023-11-14T14:04:07Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-683928","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T14_04_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:04:06Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1114 14:04:35.428520 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:04:35.428565 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:35.428576 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:35.428583 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:35.431083 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:04:35.431111 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:35.431120 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:35.431127 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:35.431133 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:35.431139 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:35 GMT
	I1114 14:04:35.431146 1255771 round_trippers.go:580]     Audit-Id: 4fbbcbd7-9182-4668-9e16-f61fe510e2b4
	I1114 14:04:35.431153 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:35.431279 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","resourceVersion":"336","creationTimestamp":"2023-11-14T14:04:07Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-683928","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T14_04_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:04:06Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1114 14:04:35.929410 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:04:35.929433 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:35.929444 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:35.929452 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:35.931920 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:04:35.931942 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:35.931950 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:35.931957 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:35.931963 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:35 GMT
	I1114 14:04:35.931970 1255771 round_trippers.go:580]     Audit-Id: ba949150-9bc9-491b-9470-fa95de6428bc
	I1114 14:04:35.931976 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:35.931982 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:35.932093 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","resourceVersion":"336","creationTimestamp":"2023-11-14T14:04:07Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-683928","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T14_04_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:04:06Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1114 14:04:36.429201 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:04:36.429227 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:36.429238 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:36.429245 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:36.431856 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:04:36.431882 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:36.431891 1255771 round_trippers.go:580]     Audit-Id: 5dbd854b-b1d1-43c2-8d27-772d29ca76ca
	I1114 14:04:36.431897 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:36.431904 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:36.431910 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:36.431917 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:36.431926 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:36 GMT
	I1114 14:04:36.432060 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","resourceVersion":"336","creationTimestamp":"2023-11-14T14:04:07Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-683928","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T14_04_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:04:06Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1114 14:04:36.432472 1255771 node_ready.go:58] node "multinode-683928" has status "Ready":"False"
	I1114 14:04:36.929229 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:04:36.929251 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:36.929261 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:36.929269 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:36.931779 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:04:36.931806 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:36.931815 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:36.931822 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:36.931829 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:36 GMT
	I1114 14:04:36.931835 1255771 round_trippers.go:580]     Audit-Id: 683d13ef-ec5e-4a5c-a2fc-d212d1c9b944
	I1114 14:04:36.931842 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:36.931853 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:36.931966 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","resourceVersion":"336","creationTimestamp":"2023-11-14T14:04:07Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-683928","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T14_04_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:04:06Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1114 14:04:37.429163 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:04:37.429184 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:37.429194 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:37.429202 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:37.431759 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:04:37.431790 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:37.431799 1255771 round_trippers.go:580]     Audit-Id: bc639685-2625-4e85-9e7c-6f966510e8dd
	I1114 14:04:37.431806 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:37.431812 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:37.431818 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:37.431824 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:37.431831 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:37 GMT
	I1114 14:04:37.431951 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","resourceVersion":"336","creationTimestamp":"2023-11-14T14:04:07Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-683928","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T14_04_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:04:06Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1114 14:04:37.929123 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:04:37.929146 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:37.929156 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:37.929163 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:37.931728 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:04:37.931756 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:37.931765 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:37 GMT
	I1114 14:04:37.931771 1255771 round_trippers.go:580]     Audit-Id: 8a3f8e25-725e-4700-8ea3-8238b2cb7acf
	I1114 14:04:37.931777 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:37.931783 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:37.931790 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:37.931796 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:37.932070 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","resourceVersion":"336","creationTimestamp":"2023-11-14T14:04:07Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-683928","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T14_04_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:04:06Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1114 14:04:38.429194 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:04:38.429217 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:38.429226 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:38.429233 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:38.431744 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:04:38.431769 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:38.431778 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:38.431785 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:38 GMT
	I1114 14:04:38.431793 1255771 round_trippers.go:580]     Audit-Id: accb95b6-1063-4f23-a8d2-7c206459d0bc
	I1114 14:04:38.431799 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:38.431805 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:38.431811 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:38.431951 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","resourceVersion":"336","creationTimestamp":"2023-11-14T14:04:07Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-683928","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T14_04_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:04:06Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1114 14:04:38.928710 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:04:38.928730 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:38.928740 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:38.928748 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:38.931357 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:04:38.931385 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:38.931393 1255771 round_trippers.go:580]     Audit-Id: 1f38e042-1fea-4302-a9a1-bcdbdd8f01b2
	I1114 14:04:38.931400 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:38.931406 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:38.931412 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:38.931418 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:38.931425 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:38 GMT
	I1114 14:04:38.931536 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","resourceVersion":"336","creationTimestamp":"2023-11-14T14:04:07Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-683928","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T14_04_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:04:06Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1114 14:04:38.931959 1255771 node_ready.go:58] node "multinode-683928" has status "Ready":"False"
	I1114 14:04:39.428565 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:04:39.428588 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:39.428598 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:39.428605 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:39.431074 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:04:39.431104 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:39.431114 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:39.431121 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:39 GMT
	I1114 14:04:39.431128 1255771 round_trippers.go:580]     Audit-Id: 1a01cb6e-c73d-4994-978d-70fddb2e89a5
	I1114 14:04:39.431135 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:39.431141 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:39.431150 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:39.431275 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","resourceVersion":"336","creationTimestamp":"2023-11-14T14:04:07Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-683928","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T14_04_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:04:06Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1114 14:04:39.929458 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:04:39.929485 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:39.929494 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:39.929502 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:39.932293 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:04:39.932318 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:39.932327 1255771 round_trippers.go:580]     Audit-Id: 0feb0c10-8174-48f2-aa2a-f998e6856315
	I1114 14:04:39.932334 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:39.932340 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:39.932346 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:39.932352 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:39.932359 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:39 GMT
	I1114 14:04:39.932463 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","resourceVersion":"336","creationTimestamp":"2023-11-14T14:04:07Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-683928","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T14_04_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:04:06Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1114 14:04:40.428697 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:04:40.428720 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:40.428730 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:40.428738 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:40.431210 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:04:40.431231 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:40.431239 1255771 round_trippers.go:580]     Audit-Id: ee5a5e07-98a6-489d-9cf9-0d7048076297
	I1114 14:04:40.431246 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:40.431252 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:40.431258 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:40.431265 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:40.431271 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:40 GMT
	I1114 14:04:40.431476 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","resourceVersion":"336","creationTimestamp":"2023-11-14T14:04:07Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-683928","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T14_04_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:04:06Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1114 14:04:40.929188 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:04:40.929240 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:40.929258 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:40.929273 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:40.931775 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:04:40.931802 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:40.931811 1255771 round_trippers.go:580]     Audit-Id: 62f18ec1-8b28-4fe7-8589-04ea5aca7253
	I1114 14:04:40.931818 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:40.931824 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:40.931831 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:40.931837 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:40.931844 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:40 GMT
	I1114 14:04:40.932028 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","resourceVersion":"336","creationTimestamp":"2023-11-14T14:04:07Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-683928","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T14_04_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:04:06Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1114 14:04:40.932435 1255771 node_ready.go:58] node "multinode-683928" has status "Ready":"False"
	I1114 14:04:41.429221 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:04:41.429246 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:41.429256 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:41.429263 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:41.431724 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:04:41.431745 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:41.431753 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:41.431760 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:41.431767 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:41 GMT
	I1114 14:04:41.431773 1255771 round_trippers.go:580]     Audit-Id: b3eef3ae-9c5c-45fe-9ca8-181bd03cc40c
	I1114 14:04:41.431779 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:41.431785 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:41.431935 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","resourceVersion":"336","creationTimestamp":"2023-11-14T14:04:07Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-683928","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T14_04_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:04:06Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1114 14:04:41.928902 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:04:41.928926 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:41.928936 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:41.928943 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:41.931639 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:04:41.931663 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:41.931672 1255771 round_trippers.go:580]     Audit-Id: 382a4883-42c5-45c9-b937-c528820e030c
	I1114 14:04:41.931688 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:41.931694 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:41.931700 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:41.931707 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:41.931713 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:41 GMT
	I1114 14:04:41.931849 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","resourceVersion":"336","creationTimestamp":"2023-11-14T14:04:07Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-683928","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T14_04_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:04:06Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1114 14:04:42.428574 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:04:42.428599 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:42.428610 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:42.428618 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:42.431123 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:04:42.431145 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:42.431153 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:42.431160 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:42.431166 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:42.431172 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:42.431179 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:42 GMT
	I1114 14:04:42.431185 1255771 round_trippers.go:580]     Audit-Id: 5618adc7-4dd6-4bb9-870c-2525884e6c1d
	I1114 14:04:42.431295 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","resourceVersion":"336","creationTimestamp":"2023-11-14T14:04:07Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-683928","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T14_04_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:04:06Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1114 14:04:42.929421 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:04:42.929444 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:42.929455 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:42.929462 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:42.931872 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:04:42.931899 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:42.931908 1255771 round_trippers.go:580]     Audit-Id: 3684079b-2e2c-4029-852f-2236f7b61033
	I1114 14:04:42.931915 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:42.931921 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:42.931927 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:42.931933 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:42.931941 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:42 GMT
	I1114 14:04:42.932136 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","resourceVersion":"336","creationTimestamp":"2023-11-14T14:04:07Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-683928","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T14_04_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:04:06Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1114 14:04:42.932538 1255771 node_ready.go:58] node "multinode-683928" has status "Ready":"False"
	I1114 14:04:43.429478 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:04:43.429515 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:43.429526 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:43.429533 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:43.431992 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:04:43.432018 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:43.432030 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:43.432037 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:43.432044 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:43 GMT
	I1114 14:04:43.432050 1255771 round_trippers.go:580]     Audit-Id: b5ba17d5-8574-4fc0-925d-178a3d88f546
	I1114 14:04:43.432056 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:43.432062 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:43.432218 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","resourceVersion":"336","creationTimestamp":"2023-11-14T14:04:07Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-683928","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T14_04_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:04:06Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1114 14:04:43.929320 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:04:43.929346 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:43.929356 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:43.929363 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:43.931955 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:04:43.931981 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:43.931990 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:43.931996 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:43 GMT
	I1114 14:04:43.932003 1255771 round_trippers.go:580]     Audit-Id: 685d0d2e-33d5-4ef7-8416-d63190745460
	I1114 14:04:43.932009 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:43.932015 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:43.932026 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:43.932308 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","resourceVersion":"336","creationTimestamp":"2023-11-14T14:04:07Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-683928","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T14_04_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:04:06Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1114 14:04:44.428702 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:04:44.428726 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:44.428736 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:44.428744 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:44.431191 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:04:44.431212 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:44.431220 1255771 round_trippers.go:580]     Audit-Id: 3a3ad596-8d86-49c3-b7ca-7ad8993fa7ab
	I1114 14:04:44.431227 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:44.431233 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:44.431239 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:44.431245 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:44.431252 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:44 GMT
	I1114 14:04:44.431362 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","resourceVersion":"336","creationTimestamp":"2023-11-14T14:04:07Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-683928","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T14_04_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:04:06Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1114 14:04:44.928610 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:04:44.928636 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:44.928647 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:44.928654 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:44.931226 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:04:44.931253 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:44.931262 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:44.931269 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:44.931275 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:44.931281 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:44.931288 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:44 GMT
	I1114 14:04:44.931295 1255771 round_trippers.go:580]     Audit-Id: 684dc721-c6a3-41da-94ec-f36acb596f71
	I1114 14:04:44.931594 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","resourceVersion":"336","creationTimestamp":"2023-11-14T14:04:07Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-683928","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T14_04_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:04:06Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1114 14:04:45.429296 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:04:45.429322 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:45.429333 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:45.429340 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:45.431730 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:04:45.431768 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:45.431776 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:45.431783 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:45.431789 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:45.431795 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:45 GMT
	I1114 14:04:45.431802 1255771 round_trippers.go:580]     Audit-Id: f9752a52-e3fb-43ad-b89e-e97962849177
	I1114 14:04:45.431808 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:45.432154 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","resourceVersion":"336","creationTimestamp":"2023-11-14T14:04:07Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-683928","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T14_04_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T14:04:06Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1114 14:04:45.432582 1255771 node_ready.go:58] node "multinode-683928" has status "Ready":"False"
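
The "Ready":"False" line above is the gate this poll is waiting on: the node object is fetched, its Ready condition inspected, and the loop sleeps and retries. As a minimal illustration of that pattern only (this is not minikube's actual node_ready.go; it assumes a standard client-go setup, and the helper names nodeIsReady and waitNodeReady are hypothetical):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeIsReady reports whether the node's Ready condition is True.
func nodeIsReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitNodeReady polls GET /api/v1/nodes/<name> until the Ready condition
// turns True or the timeout lapses, mirroring the ~500ms cadence in the log.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		n, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil && nodeIsReady(n) {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("node %q not Ready within %v", name, timeout)
}

func main() {
	// Load ~/.kube/config; a test harness would supply its own config.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitNodeReady(context.Background(), cs, "multinode-683928", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println(`node "multinode-683928" has status "Ready":"True"`)
}

The real implementation adds its duration metrics and verbose logging on top; the point here is just the condition check and the fixed polling cadence visible in the timestamps.
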
	I1114 14:04:45.929192 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:04:45.929215 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:45.929226 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:45.929233 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:45.931786 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:04:45.931814 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:45.931823 1255771 round_trippers.go:580]     Audit-Id: b7648f3d-2bf8-4b2d-975b-76aa393f6b9c
	I1114 14:04:45.931830 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:45.931836 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:45.931843 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:45.931849 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:45.931857 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:45 GMT
	I1114 14:04:45.932143 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","resourceVersion":"336","creationTimestamp":"2023-11-14T14:04:07Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-683928","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T14_04_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T14:04:06Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1114 14:04:46.429325 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:04:46.429350 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:46.429361 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:46.429368 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:46.431959 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:04:46.431982 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:46.431991 1255771 round_trippers.go:580]     Audit-Id: 71427753-e854-4e98-9adb-a11817ee33d2
	I1114 14:04:46.431998 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:46.432004 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:46.432010 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:46.432016 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:46.432023 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:46 GMT
	I1114 14:04:46.432164 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","resourceVersion":"336","creationTimestamp":"2023-11-14T14:04:07Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-683928","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T14_04_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T14:04:06Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1114 14:04:46.929280 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:04:46.929308 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:46.929317 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:46.929325 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:46.931897 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:04:46.931923 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:46.931932 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:46.931938 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:46 GMT
	I1114 14:04:46.931945 1255771 round_trippers.go:580]     Audit-Id: a8f338fb-3868-49d0-98bc-f5520aff8a26
	I1114 14:04:46.931951 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:46.931958 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:46.931964 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:46.932099 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","resourceVersion":"336","creationTimestamp":"2023-11-14T14:04:07Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-683928","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T14_04_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T14:04:06Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1114 14:04:47.429281 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:04:47.429305 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:47.429321 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:47.429329 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:47.431711 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:04:47.431735 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:47.431752 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:47.431759 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:47.431765 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:47.431771 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:47.431781 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:47 GMT
	I1114 14:04:47.431788 1255771 round_trippers.go:580]     Audit-Id: 7bbc0fd2-f095-4be9-9360-ff5fc24324eb
	I1114 14:04:47.431927 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","resourceVersion":"336","creationTimestamp":"2023-11-14T14:04:07Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-683928","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T14_04_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T14:04:06Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1114 14:04:47.929024 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:04:47.929052 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:47.929063 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:47.929071 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:47.931490 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:04:47.931515 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:47.931524 1255771 round_trippers.go:580]     Audit-Id: 1fe8679c-a3ed-45b0-80c0-5641182b6408
	I1114 14:04:47.931530 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:47.931537 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:47.931543 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:47.931549 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:47.931556 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:47 GMT
	I1114 14:04:47.931732 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","resourceVersion":"336","creationTimestamp":"2023-11-14T14:04:07Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-683928","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T14_04_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T14:04:06Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1114 14:04:47.932164 1255771 node_ready.go:58] node "multinode-683928" has status "Ready":"False"
	I1114 14:04:48.428867 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:04:48.428889 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:48.428905 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:48.428912 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:48.431382 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:04:48.431403 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:48.431411 1255771 round_trippers.go:580]     Audit-Id: 1edd71cd-b6fc-4188-934f-ee0932bef6fc
	I1114 14:04:48.431417 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:48.431423 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:48.431429 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:48.431436 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:48.431442 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:48 GMT
	I1114 14:04:48.431574 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","resourceVersion":"336","creationTimestamp":"2023-11-14T14:04:07Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-683928","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T14_04_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T14:04:06Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1114 14:04:48.928606 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:04:48.928631 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:48.928641 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:48.928648 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:48.931403 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:04:48.931434 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:48.931442 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:48.931450 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:48 GMT
	I1114 14:04:48.931456 1255771 round_trippers.go:580]     Audit-Id: f07ad40c-9fc8-4ffb-9c47-b7bbf6c28cc1
	I1114 14:04:48.931462 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:48.931468 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:48.931475 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:48.931586 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","resourceVersion":"336","creationTimestamp":"2023-11-14T14:04:07Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-683928","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T14_04_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T14:04:06Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1114 14:04:49.428599 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:04:49.428622 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:49.428631 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:49.428639 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:49.431143 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:04:49.431170 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:49.431179 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:49.431185 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:49.431191 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:49.431198 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:49 GMT
	I1114 14:04:49.431204 1255771 round_trippers.go:580]     Audit-Id: 6e5ff4c4-3eac-4a24-8351-88a25d968926
	I1114 14:04:49.431210 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:49.431518 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","resourceVersion":"336","creationTimestamp":"2023-11-14T14:04:07Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-683928","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T14_04_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T14:04:06Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1114 14:04:49.929298 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:04:49.929319 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:49.929329 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:49.929336 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:49.931838 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:04:49.931865 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:49.931874 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:49.931880 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:49.931887 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:49 GMT
	I1114 14:04:49.931893 1255771 round_trippers.go:580]     Audit-Id: 7caf72af-9b5f-427b-8bba-15a95b1d0c03
	I1114 14:04:49.931899 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:49.931905 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:49.932140 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","resourceVersion":"336","creationTimestamp":"2023-11-14T14:04:07Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-683928","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T14_04_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T14:04:06Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1114 14:04:49.932573 1255771 node_ready.go:58] node "multinode-683928" has status "Ready":"False"
	I1114 14:04:50.429373 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:04:50.429398 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:50.429408 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:50.429416 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:50.431900 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:04:50.431921 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:50.431930 1255771 round_trippers.go:580]     Audit-Id: 72d63bbc-f247-4399-abe9-1fabe0fe2029
	I1114 14:04:50.431936 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:50.431942 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:50.431948 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:50.431954 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:50.431960 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:50 GMT
	I1114 14:04:50.432116 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","resourceVersion":"336","creationTimestamp":"2023-11-14T14:04:07Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-683928","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T14_04_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T14:04:06Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1114 14:04:50.929220 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:04:50.929245 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:50.929255 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:50.929263 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:50.931751 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:04:50.931780 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:50.931790 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:50.931796 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:50.931803 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:50.931811 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:50.931818 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:50 GMT
	I1114 14:04:50.931825 1255771 round_trippers.go:580]     Audit-Id: 2d8cd914-a764-4400-aade-f3717eb3ff24
	I1114 14:04:50.931936 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","resourceVersion":"336","creationTimestamp":"2023-11-14T14:04:07Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-683928","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T14_04_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T14:04:06Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1114 14:04:51.429014 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:04:51.429043 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:51.429053 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:51.429061 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:51.431467 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:04:51.431489 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:51.431497 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:51.431503 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:51.431509 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:51.431515 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:51 GMT
	I1114 14:04:51.431522 1255771 round_trippers.go:580]     Audit-Id: 6340bfcc-873f-48a3-b116-ed20403f3629
	I1114 14:04:51.431527 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:51.431676 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","resourceVersion":"336","creationTimestamp":"2023-11-14T14:04:07Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-683928","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T14_04_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T14:04:06Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1114 14:04:51.928931 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:04:51.928957 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:51.928968 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:51.928976 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:51.931584 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:04:51.931610 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:51.931625 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:51.931632 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:51.931639 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:51.931645 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:51.931652 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:51 GMT
	I1114 14:04:51.931658 1255771 round_trippers.go:580]     Audit-Id: d4611273-93a2-4b07-a100-10b26c502415
	I1114 14:04:51.931746 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","resourceVersion":"336","creationTimestamp":"2023-11-14T14:04:07Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-683928","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T14_04_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T14:04:06Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1114 14:04:52.429405 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:04:52.429428 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:52.429438 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:52.429445 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:52.431979 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:04:52.432007 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:52.432016 1255771 round_trippers.go:580]     Audit-Id: 335174f6-759d-4f08-b489-778175cef0ed
	I1114 14:04:52.432023 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:52.432029 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:52.432035 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:52.432042 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:52.432049 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:52 GMT
	I1114 14:04:52.432177 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","resourceVersion":"336","creationTimestamp":"2023-11-14T14:04:07Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-683928","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T14_04_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T14:04:06Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1114 14:04:52.432623 1255771 node_ready.go:58] node "multinode-683928" has status "Ready":"False"
	I1114 14:04:52.929482 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:04:52.929503 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:52.929513 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:52.929521 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:52.932001 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:04:52.932028 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:52.932037 1255771 round_trippers.go:580]     Audit-Id: 819f6d8b-6a9b-458d-83a4-d8208e9225ac
	I1114 14:04:52.932044 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:52.932050 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:52.932056 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:52.932063 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:52.932069 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:52 GMT
	I1114 14:04:52.932184 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","resourceVersion":"336","creationTimestamp":"2023-11-14T14:04:07Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-683928","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T14_04_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T14:04:06Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1114 14:04:53.429531 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:04:53.429555 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:53.429571 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:53.429578 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:53.432095 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:04:53.432120 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:53.432130 1255771 round_trippers.go:580]     Audit-Id: 407d00fc-be82-463b-8f9b-494011ef0903
	I1114 14:04:53.432136 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:53.432143 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:53.432151 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:53.432157 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:53.432163 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:53 GMT
	I1114 14:04:53.432296 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","resourceVersion":"336","creationTimestamp":"2023-11-14T14:04:07Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-683928","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T14_04_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T14:04:06Z","fieldsType":"FieldsV1","fiel [truncated 6223 chars]
	I1114 14:04:53.928687 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:04:53.928708 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:53.928718 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:53.928725 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:53.947960 1255771 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I1114 14:04:53.947987 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:53.947997 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:53.948004 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:53.948010 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:53.948018 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:53.948024 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:53 GMT
	I1114 14:04:53.948030 1255771 round_trippers.go:580]     Audit-Id: a266b15c-2605-48b4-98e2-304c418cb167
	I1114 14:04:53.948401 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","resourceVersion":"401","creationTimestamp":"2023-11-14T14:04:07Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-683928","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T14_04_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T14:04:06Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1114 14:04:53.948840 1255771 node_ready.go:49] node "multinode-683928" has status "Ready":"True"
	I1114 14:04:53.948854 1255771 node_ready.go:38] duration metric: took 30.583772831s waiting for node "multinode-683928" to be "Ready" ...
	I1114 14:04:53.948864 1255771 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
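
With the node Ready after 30.58s, the wait moves on to the system-critical pods enumerated above. A hedged sketch of that phase (illustrative only, reusing a client-go clientset as in the previous sketch; waitPodsReady and podIsReady are hypothetical names, not minikube's pod_ready.go):

package readiness

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// podIsReady reports whether the pod's Ready condition is True.
func podIsReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitPodsReady blocks until every kube-system pod matching the
// system-critical selectors from the log reports Ready, or the timeout lapses.
func waitPodsReady(ctx context.Context, cs kubernetes.Interface, timeout time.Duration) error {
	selectors := []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
	}
	deadline := time.Now().Add(timeout)
	for _, sel := range selectors {
		for {
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out waiting for kube-system pods matching %q", sel)
			}
			pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: sel})
			if err == nil && len(pods.Items) > 0 {
				ready := true
				for i := range pods.Items {
					if !podIsReady(&pods.Items[i]) {
						ready = false
						break
					}
				}
				if ready {
					break // this selector is satisfied; move to the next
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
	}
	return nil
}
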
	I1114 14:04:53.948948 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1114 14:04:53.948954 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:53.948962 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:53.948969 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:53.955434 1255771 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1114 14:04:53.955456 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:53.955465 1255771 round_trippers.go:580]     Audit-Id: 5fd82a9c-c103-4e18-ba2d-300164bec671
	I1114 14:04:53.955472 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:53.955479 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:53.955486 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:53.955491 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:53.955498 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:53 GMT
	I1114 14:04:53.957016 1255771 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"401"},"items":[{"metadata":{"name":"coredns-5dd5756b68-wxp87","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"51c2bd2a-c15e-4489-ad3b-7ca65e4ec898","resourceVersion":"348","creationTimestamp":"2023-11-14T14:04:22Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"81b2172f-b4bf-4215-976d-efff1994decb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:04:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"81b2172f-b4bf-4215-976d-efff1994decb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 52942 chars]
	I1114 14:04:53.961233 1255771 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-wxp87" in "kube-system" namespace to be "Ready" ...
	I1114 14:04:53.961349 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-wxp87
	I1114 14:04:53.961364 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:53.961374 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:53.961400 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:53.973343 1255771 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1114 14:04:53.973368 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:53.973377 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:53 GMT
	I1114 14:04:53.973384 1255771 round_trippers.go:580]     Audit-Id: 4e24c121-d519-4036-9c51-4a700bb3694c
	I1114 14:04:53.973391 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:53.973397 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:53.973403 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:53.973414 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:53.973577 1255771 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-wxp87","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"51c2bd2a-c15e-4489-ad3b-7ca65e4ec898","resourceVersion":"402","creationTimestamp":"2023-11-14T14:04:22Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"81b2172f-b4bf-4215-976d-efff1994decb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:04:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"81b2172f-b4bf-4215-976d-efff1994decb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 4762 chars]
	I1114 14:04:53.974053 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:04:53.974070 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:53.974079 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:53.974086 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:53.986393 1255771 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1114 14:04:53.986419 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:53.986428 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:53.986435 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:53.986441 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:53.986453 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:53 GMT
	I1114 14:04:53.986462 1255771 round_trippers.go:580]     Audit-Id: ce8a7d5b-1b9a-4c9e-aac2-88ac62f02bb8
	I1114 14:04:53.986468 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:53.986935 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","resourceVersion":"401","creationTimestamp":"2023-11-14T14:04:07Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-683928","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T14_04_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T14:04:06Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1114 14:04:53.987477 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-wxp87
	I1114 14:04:53.987498 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:53.987508 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:53.987516 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:53.990741 1255771 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 14:04:53.990765 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:53.990774 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:53.990780 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:53.990786 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:53.990793 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:53 GMT
	I1114 14:04:53.990803 1255771 round_trippers.go:580]     Audit-Id: 692b2f3c-89e4-48c8-93bf-9a947d54b579
	I1114 14:04:53.990815 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:53.990946 1255771 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-wxp87","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"51c2bd2a-c15e-4489-ad3b-7ca65e4ec898","resourceVersion":"404","creationTimestamp":"2023-11-14T14:04:22Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"81b2172f-b4bf-4215-976d-efff1994decb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:04:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"81b2172f-b4bf-4215-976d-efff1994decb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1114 14:04:53.991487 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:04:53.991506 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:53.991515 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:53.991522 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:53.994769 1255771 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 14:04:53.994790 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:53.994798 1255771 round_trippers.go:580]     Audit-Id: f876827f-71a3-405e-94cc-11aa1c0dd46e
	I1114 14:04:53.994805 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:53.994811 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:53.994817 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:53.994824 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:53.994833 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:53 GMT
	I1114 14:04:53.995347 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","resourceVersion":"401","creationTimestamp":"2023-11-14T14:04:07Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-683928","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T14_04_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T14:04:06Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1114 14:04:54.496530 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-wxp87
	I1114 14:04:54.496585 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:54.496595 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:54.496602 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:54.500922 1255771 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1114 14:04:54.500948 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:54.500957 1255771 round_trippers.go:580]     Audit-Id: c275a5df-ffe6-426f-8b53-368163686fce
	I1114 14:04:54.500963 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:54.500969 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:54.500976 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:54.500989 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:54.500997 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:54 GMT
	I1114 14:04:54.501584 1255771 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-wxp87","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"51c2bd2a-c15e-4489-ad3b-7ca65e4ec898","resourceVersion":"404","creationTimestamp":"2023-11-14T14:04:22Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"81b2172f-b4bf-4215-976d-efff1994decb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:04:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"81b2172f-b4bf-4215-976d-efff1994decb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1114 14:04:54.502129 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:04:54.502149 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:54.502158 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:54.502165 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:54.510663 1255771 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1114 14:04:54.510693 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:54.510701 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:54.510708 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:54.510714 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:54.510721 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:54.510728 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:54 GMT
	I1114 14:04:54.510735 1255771 round_trippers.go:580]     Audit-Id: 361206a5-2dda-4c97-8e65-c165ab561d2a
	I1114 14:04:54.511123 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","resourceVersion":"401","creationTimestamp":"2023-11-14T14:04:07Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-683928","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T14_04_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:04:06Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1114 14:04:54.996409 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-wxp87
	I1114 14:04:54.996508 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:54.996537 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:54.996585 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:54.999798 1255771 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 14:04:54.999824 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:54.999834 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:54.999841 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:54.999847 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:54.999853 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:54.999860 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:54 GMT
	I1114 14:04:54.999866 1255771 round_trippers.go:580]     Audit-Id: 4321e21c-ccf1-4371-9389-8d9bb5ab21a0
	I1114 14:04:55.000148 1255771 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-wxp87","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"51c2bd2a-c15e-4489-ad3b-7ca65e4ec898","resourceVersion":"404","creationTimestamp":"2023-11-14T14:04:22Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"81b2172f-b4bf-4215-976d-efff1994decb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:04:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"81b2172f-b4bf-4215-976d-efff1994decb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I1114 14:04:55.000752 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:04:55.000768 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:55.000777 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:55.000784 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:55.005713 1255771 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1114 14:04:55.005745 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:55.005755 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:55.005762 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:55.005769 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:55.005798 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:55 GMT
	I1114 14:04:55.005811 1255771 round_trippers.go:580]     Audit-Id: e0968d53-8a61-4ce2-ba4b-2e55db3a2d58
	I1114 14:04:55.005819 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:55.006021 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","resourceVersion":"401","creationTimestamp":"2023-11-14T14:04:07Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-683928","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T14_04_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:04:06Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1114 14:04:55.496021 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-wxp87
	I1114 14:04:55.496045 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:55.496054 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:55.496062 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:55.498737 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:04:55.498766 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:55.498775 1255771 round_trippers.go:580]     Audit-Id: 231f1be7-746d-4b47-a20a-e8196cb38701
	I1114 14:04:55.498781 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:55.498795 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:55.498802 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:55.498808 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:55.498824 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:55 GMT
	I1114 14:04:55.499037 1255771 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-wxp87","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"51c2bd2a-c15e-4489-ad3b-7ca65e4ec898","resourceVersion":"417","creationTimestamp":"2023-11-14T14:04:22Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"81b2172f-b4bf-4215-976d-efff1994decb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:04:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"81b2172f-b4bf-4215-976d-efff1994decb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I1114 14:04:55.499589 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:04:55.499606 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:55.499614 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:55.499628 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:55.501952 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:04:55.502016 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:55.502040 1255771 round_trippers.go:580]     Audit-Id: 62281c13-c7c4-4e08-a954-d3e3edb4ed5f
	I1114 14:04:55.502061 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:55.502097 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:55.502124 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:55.502147 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:55.502181 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:55 GMT
	I1114 14:04:55.502326 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","resourceVersion":"401","creationTimestamp":"2023-11-14T14:04:07Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-683928","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T14_04_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:04:06Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1114 14:04:55.502748 1255771 pod_ready.go:92] pod "coredns-5dd5756b68-wxp87" in "kube-system" namespace has status "Ready":"True"
	I1114 14:04:55.502770 1255771 pod_ready.go:81] duration metric: took 1.541505642s waiting for pod "coredns-5dd5756b68-wxp87" in "kube-system" namespace to be "Ready" ...
	I1114 14:04:55.502781 1255771 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-683928" in "kube-system" namespace to be "Ready" ...
	I1114 14:04:55.502839 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-683928
	I1114 14:04:55.502848 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:55.502856 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:55.502863 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:55.505342 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:04:55.505410 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:55.505434 1255771 round_trippers.go:580]     Audit-Id: 892371a2-e795-4bc6-8349-2eea1b0503a2
	I1114 14:04:55.505455 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:55.505487 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:55.505512 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:55.505533 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:55.505568 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:55 GMT
	I1114 14:04:55.505770 1255771 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-683928","namespace":"kube-system","uid":"b8abc8dc-45bf-4827-8e3e-3de67a0f0e45","resourceVersion":"389","creationTimestamp":"2023-11-14T14:04:10Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"ec9bb177894ad2f6afba67be57938994","kubernetes.io/config.mirror":"ec9bb177894ad2f6afba67be57938994","kubernetes.io/config.seen":"2023-11-14T14:04:09.837680599Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:04:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I1114 14:04:55.506266 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:04:55.506286 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:55.506295 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:55.506302 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:55.508853 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:04:55.508922 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:55.508944 1255771 round_trippers.go:580]     Audit-Id: 48de9563-e656-4b33-85a9-f7e063cfcb38
	I1114 14:04:55.508967 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:55.509002 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:55.509030 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:55.509050 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:55.509073 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:55 GMT
	I1114 14:04:55.509195 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","resourceVersion":"401","creationTimestamp":"2023-11-14T14:04:07Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-683928","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T14_04_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:04:06Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1114 14:04:55.509630 1255771 pod_ready.go:92] pod "etcd-multinode-683928" in "kube-system" namespace has status "Ready":"True"
	I1114 14:04:55.509649 1255771 pod_ready.go:81] duration metric: took 6.860754ms waiting for pod "etcd-multinode-683928" in "kube-system" namespace to be "Ready" ...
	I1114 14:04:55.509663 1255771 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-683928" in "kube-system" namespace to be "Ready" ...
	I1114 14:04:55.509725 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-683928
	I1114 14:04:55.509733 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:55.509747 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:55.509754 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:55.512205 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:04:55.512250 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:55.512259 1255771 round_trippers.go:580]     Audit-Id: ff34b226-0af3-4aed-8ae5-e208e9b865e3
	I1114 14:04:55.512265 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:55.512271 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:55.512278 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:55.512284 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:55.512294 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:55 GMT
	I1114 14:04:55.512524 1255771 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-683928","namespace":"kube-system","uid":"a4d6bf70-13a0-4603-8504-7497b58f5d76","resourceVersion":"390","creationTimestamp":"2023-11-14T14:04:10Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"f01db491b13fa9a24c600dd08fa8d46d","kubernetes.io/config.mirror":"f01db491b13fa9a24c600dd08fa8d46d","kubernetes.io/config.seen":"2023-11-14T14:04:09.837686146Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:04:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I1114 14:04:55.513076 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:04:55.513094 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:55.513104 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:55.513112 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:55.515378 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:04:55.515438 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:55.515452 1255771 round_trippers.go:580]     Audit-Id: 9897056a-742b-47ab-b0a9-378b43412439
	I1114 14:04:55.515462 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:55.515470 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:55.515477 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:55.515486 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:55.515502 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:55 GMT
	I1114 14:04:55.515683 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","resourceVersion":"401","creationTimestamp":"2023-11-14T14:04:07Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-683928","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T14_04_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:04:06Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1114 14:04:55.516079 1255771 pod_ready.go:92] pod "kube-apiserver-multinode-683928" in "kube-system" namespace has status "Ready":"True"
	I1114 14:04:55.516095 1255771 pod_ready.go:81] duration metric: took 6.42453ms waiting for pod "kube-apiserver-multinode-683928" in "kube-system" namespace to be "Ready" ...
	I1114 14:04:55.516107 1255771 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-683928" in "kube-system" namespace to be "Ready" ...
	I1114 14:04:55.516169 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-683928
	I1114 14:04:55.516187 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:55.516196 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:55.516203 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:55.519004 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:04:55.519032 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:55.519041 1255771 round_trippers.go:580]     Audit-Id: cd84e98a-ff8a-498e-91e0-11a055cdfd65
	I1114 14:04:55.519047 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:55.519054 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:55.519060 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:55.519070 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:55.519081 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:55 GMT
	I1114 14:04:55.519304 1255771 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-683928","namespace":"kube-system","uid":"fe4ca2c2-2dba-4c17-ac7b-a62caa16c5cb","resourceVersion":"391","creationTimestamp":"2023-11-14T14:04:10Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"992096f1db4796aad2da542d99dc8329","kubernetes.io/config.mirror":"992096f1db4796aad2da542d99dc8329","kubernetes.io/config.seen":"2023-11-14T14:04:09.837687631Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:04:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I1114 14:04:55.529182 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:04:55.529223 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:55.529234 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:55.529242 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:55.531794 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:04:55.531824 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:55.531833 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:55.531840 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:55 GMT
	I1114 14:04:55.531847 1255771 round_trippers.go:580]     Audit-Id: 671bf8c5-91b8-4f03-8968-7c849779636d
	I1114 14:04:55.531853 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:55.531864 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:55.531870 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:55.532116 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","resourceVersion":"401","creationTimestamp":"2023-11-14T14:04:07Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-683928","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T14_04_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:04:06Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1114 14:04:55.532581 1255771 pod_ready.go:92] pod "kube-controller-manager-multinode-683928" in "kube-system" namespace has status "Ready":"True"
	I1114 14:04:55.532622 1255771 pod_ready.go:81] duration metric: took 16.503428ms waiting for pod "kube-controller-manager-multinode-683928" in "kube-system" namespace to be "Ready" ...
	I1114 14:04:55.532645 1255771 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vcfc4" in "kube-system" namespace to be "Ready" ...
	I1114 14:04:55.729100 1255771 request.go:629] Waited for 196.390789ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vcfc4
	I1114 14:04:55.729206 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vcfc4
	I1114 14:04:55.729246 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:55.729264 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:55.729273 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:55.731945 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:04:55.731970 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:55.731979 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:55.731986 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:55.731993 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:55.732000 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:55 GMT
	I1114 14:04:55.732006 1255771 round_trippers.go:580]     Audit-Id: 57d389d5-7e22-4e33-a50f-1b4568033c10
	I1114 14:04:55.732017 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:55.732232 1255771 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-vcfc4","generateName":"kube-proxy-","namespace":"kube-system","uid":"679e31a8-7e53-42d9-afd5-5b3b18854981","resourceVersion":"374","creationTimestamp":"2023-11-14T14:04:22Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"33f72355-c83b-465e-bd1f-56fb8e339b0d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:04:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"33f72355-c83b-465e-bd1f-56fb8e339b0d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5509 chars]
	I1114 14:04:55.929126 1255771 request.go:629] Waited for 196.34567ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:04:55.929208 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:04:55.929227 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:55.929236 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:55.929243 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:55.931986 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:04:55.932022 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:55.932031 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:55.932038 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:55.932044 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:55.932050 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:55 GMT
	I1114 14:04:55.932057 1255771 round_trippers.go:580]     Audit-Id: 90a39d75-bbea-4897-96f6-fc9286730e6c
	I1114 14:04:55.932070 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:55.932179 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","resourceVersion":"401","creationTimestamp":"2023-11-14T14:04:07Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-683928","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T14_04_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:04:06Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1114 14:04:55.932679 1255771 pod_ready.go:92] pod "kube-proxy-vcfc4" in "kube-system" namespace has status "Ready":"True"
	I1114 14:04:55.932702 1255771 pod_ready.go:81] duration metric: took 400.049476ms waiting for pod "kube-proxy-vcfc4" in "kube-system" namespace to be "Ready" ...
	I1114 14:04:55.932715 1255771 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-683928" in "kube-system" namespace to be "Ready" ...
	I1114 14:04:56.129128 1255771 request.go:629] Waited for 196.342667ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-683928
	I1114 14:04:56.129216 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-683928
	I1114 14:04:56.129225 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:56.129234 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:56.129241 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:56.132044 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:04:56.132145 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:56.132164 1255771 round_trippers.go:580]     Audit-Id: 50f089a0-e7eb-4cc5-8b5e-77bb75f6cacc
	I1114 14:04:56.132171 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:56.132179 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:56.132197 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:56.132208 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:56.132215 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:56 GMT
	I1114 14:04:56.132334 1255771 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-683928","namespace":"kube-system","uid":"21e5e748-a68f-4769-9422-281cee1db8ac","resourceVersion":"388","creationTimestamp":"2023-11-14T14:04:10Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ab73247eb2483e7720424cdc15c19b03","kubernetes.io/config.mirror":"ab73247eb2483e7720424cdc15c19b03","kubernetes.io/config.seen":"2023-11-14T14:04:09.837688697Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:04:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I1114 14:04:56.329156 1255771 request.go:629] Waited for 196.353932ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:04:56.329223 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:04:56.329236 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:56.329245 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:56.329256 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:56.331728 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:04:56.331766 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:56.331774 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:56.331781 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:56.331787 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:56.331793 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:56 GMT
	I1114 14:04:56.331800 1255771 round_trippers.go:580]     Audit-Id: 66397110-baa1-4f06-84aa-fc8b5d1135c3
	I1114 14:04:56.331816 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:56.331922 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","resourceVersion":"401","creationTimestamp":"2023-11-14T14:04:07Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-683928","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T14_04_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:04:06Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1114 14:04:56.332335 1255771 pod_ready.go:92] pod "kube-scheduler-multinode-683928" in "kube-system" namespace has status "Ready":"True"
	I1114 14:04:56.332355 1255771 pod_ready.go:81] duration metric: took 399.628244ms waiting for pod "kube-scheduler-multinode-683928" in "kube-system" namespace to be "Ready" ...
	I1114 14:04:56.332367 1255771 pod_ready.go:38] duration metric: took 2.383475998s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
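
The trace above is one full pass of minikube's pod_ready loop: for each system-critical pod it GETs the pod, checks the Ready condition, GETs the node, and retries on a roughly 500ms cadence until the condition is True or the 6m0s budget expires. Below is a minimal client-go sketch of the same wait, assuming a standard kubeconfig and a recent client-go; the helper name podReady is illustrative, not minikube's own code.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's Ready condition is True.
	func podReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Poll every 500ms, give up after 6 minutes, matching the log's
		// "waiting up to 6m0s" and its ~500ms GET cadence.
		err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-5dd5756b68-wxp87", metav1.GetOptions{})
				if err != nil {
					return false, nil // transient API error: keep polling
				}
				return podReady(pod), nil
			})
		fmt.Println("pod ready:", err == nil)
	}
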
	I1114 14:04:56.332385 1255771 api_server.go:52] waiting for apiserver process to appear ...
	I1114 14:04:56.332442 1255771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 14:04:56.344112 1255771 command_runner.go:130] > 1230
	I1114 14:04:56.345491 1255771 api_server.go:72] duration metric: took 33.180463579s to wait for apiserver process to appear ...
	I1114 14:04:56.345513 1255771 api_server.go:88] waiting for apiserver healthz status ...
	I1114 14:04:56.345531 1255771 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1114 14:04:56.354531 1255771 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1114 14:04:56.354610 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I1114 14:04:56.354622 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:56.354631 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:56.354640 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:56.355822 1255771 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1114 14:04:56.355871 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:56.355901 1255771 round_trippers.go:580]     Audit-Id: cabd6567-6a9c-40f4-9610-f3f8921ce555
	I1114 14:04:56.355910 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:56.355916 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:56.355927 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:56.355942 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:56.355953 1255771 round_trippers.go:580]     Content-Length: 264
	I1114 14:04:56.355963 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:56 GMT
	I1114 14:04:56.355991 1255771 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.3",
	  "gitCommit": "a8a1abc25cad87333840cd7d54be2efaf31a3177",
	  "gitTreeState": "clean",
	  "buildDate": "2023-10-18T11:33:18Z",
	  "goVersion": "go1.20.10",
	  "compiler": "gc",
	  "platform": "linux/arm64"
	}
	I1114 14:04:56.356090 1255771 api_server.go:141] control plane version: v1.28.3
	I1114 14:04:56.356109 1255771 api_server.go:131] duration metric: took 10.589442ms to wait for apiserver health ...
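
The pgrep/healthz/version sequence above is how minikube decides the apiserver is actually up: the process exists on the node, /healthz answers "ok", and /version yields the control-plane version logged at 14:04:56. A hedged sketch of the two HTTP probes with client-go's discovery client, reusing the clientset cs from the sketch above (the function name is illustrative):

	// probeAPIServer mirrors the /healthz and /version checks in the log.
	// Extra imports: context, fmt, k8s.io/client-go/kubernetes.
	func probeAPIServer(ctx context.Context, cs *kubernetes.Clientset) error {
		// Raw GET of /healthz; a healthy server answers 200 with body "ok".
		body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
		if err != nil {
			return err
		}
		fmt.Printf("healthz: %s\n", body)
		// GET /version, decoded into a version.Info struct.
		info, err := cs.Discovery().ServerVersion()
		if err != nil {
			return err
		}
		fmt.Println("control plane version:", info.GitVersion) // v1.28.3 in this run
		return nil
	}
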
	I1114 14:04:56.356119 1255771 system_pods.go:43] waiting for kube-system pods to appear ...
	I1114 14:04:56.529551 1255771 request.go:629] Waited for 173.341133ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1114 14:04:56.529613 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1114 14:04:56.529619 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:56.529628 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:56.529640 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:56.533126 1255771 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 14:04:56.533154 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:56.533164 1255771 round_trippers.go:580]     Audit-Id: 4127f8f6-1b92-4538-976b-ea47f0579999
	I1114 14:04:56.533171 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:56.533178 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:56.533185 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:56.533192 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:56.533198 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:56 GMT
	I1114 14:04:56.533707 1255771 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"422"},"items":[{"metadata":{"name":"coredns-5dd5756b68-wxp87","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"51c2bd2a-c15e-4489-ad3b-7ca65e4ec898","resourceVersion":"417","creationTimestamp":"2023-11-14T14:04:22Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"81b2172f-b4bf-4215-976d-efff1994decb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:04:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"81b2172f-b4bf-4215-976d-efff1994decb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55611 chars]
	I1114 14:04:56.536173 1255771 system_pods.go:59] 8 kube-system pods found
	I1114 14:04:56.536204 1255771 system_pods.go:61] "coredns-5dd5756b68-wxp87" [51c2bd2a-c15e-4489-ad3b-7ca65e4ec898] Running
	I1114 14:04:56.536210 1255771 system_pods.go:61] "etcd-multinode-683928" [b8abc8dc-45bf-4827-8e3e-3de67a0f0e45] Running
	I1114 14:04:56.536215 1255771 system_pods.go:61] "kindnet-sgvbn" [7c963530-9d71-4472-afb4-b6a45c1b8186] Running
	I1114 14:04:56.536220 1255771 system_pods.go:61] "kube-apiserver-multinode-683928" [a4d6bf70-13a0-4603-8504-7497b58f5d76] Running
	I1114 14:04:56.536225 1255771 system_pods.go:61] "kube-controller-manager-multinode-683928" [fe4ca2c2-2dba-4c17-ac7b-a62caa16c5cb] Running
	I1114 14:04:56.536230 1255771 system_pods.go:61] "kube-proxy-vcfc4" [679e31a8-7e53-42d9-afd5-5b3b18854981] Running
	I1114 14:04:56.536235 1255771 system_pods.go:61] "kube-scheduler-multinode-683928" [21e5e748-a68f-4769-9422-281cee1db8ac] Running
	I1114 14:04:56.536240 1255771 system_pods.go:61] "storage-provisioner" [5444133d-cc06-4053-afe8-529d67cee17e] Running
	I1114 14:04:56.536245 1255771 system_pods.go:74] duration metric: took 180.116824ms to wait for pod list to return data ...
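
The recurring "Waited for ~196ms due to client-side throttling, not priority and fairness" lines come from client-go's own rate limiter, not from the server's API Priority and Fairness: with the client's default limit (historically 5 QPS with a burst of 10), back-to-back requests get spaced roughly 200ms apart, which matches the waits logged here. A caller that needs a faster poll cadence raises the limits on rest.Config before building the clientset; a sketch with illustrative values:

	// Raise the client-side rate limit so sequential GETs are not delayed.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cfg.QPS = 50    // default is 5 requests/second when left at zero
	cfg.Burst = 100 // default burst is 10
	cs, err := kubernetes.NewForConfig(cfg)
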
	I1114 14:04:56.536259 1255771 default_sa.go:34] waiting for default service account to be created ...
	I1114 14:04:56.729685 1255771 request.go:629] Waited for 193.346658ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I1114 14:04:56.729757 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I1114 14:04:56.729770 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:56.729780 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:56.729787 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:56.732622 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:04:56.732646 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:56.732658 1255771 round_trippers.go:580]     Content-Length: 261
	I1114 14:04:56.732665 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:56 GMT
	I1114 14:04:56.732694 1255771 round_trippers.go:580]     Audit-Id: b115f968-f2e9-416b-8948-123be6a989ad
	I1114 14:04:56.732707 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:56.732714 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:56.732724 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:56.732738 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:56.732769 1255771 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"422"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"b5d9b9b8-ab7b-41ca-a02e-edc6212eacfa","resourceVersion":"339","creationTimestamp":"2023-11-14T14:04:22Z"}}]}
	I1114 14:04:56.732973 1255771 default_sa.go:45] found service account: "default"
	I1114 14:04:56.733004 1255771 default_sa.go:55] duration metric: took 196.735304ms for default service account to be created ...
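
The default_sa step needs only one list call: the service-account controller inside kube-controller-manager creates a ServiceAccount named "default" in every namespace, so its presence is a cheap signal that the controller-manager is doing real work. The equivalent call, reusing cs and a context ctx from the sketches above:

	// List ServiceAccounts in "default" and look for the one named "default".
	sas, err := cs.CoreV1().ServiceAccounts("default").List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, sa := range sas.Items {
		if sa.Name == "default" {
			fmt.Println("found service account:", sa.Name)
		}
	}
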
	I1114 14:04:56.733013 1255771 system_pods.go:116] waiting for k8s-apps to be running ...
	I1114 14:04:56.929430 1255771 request.go:629] Waited for 196.340033ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1114 14:04:56.929507 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1114 14:04:56.929513 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:56.929522 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:56.929534 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:56.933266 1255771 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1114 14:04:56.933295 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:56.933305 1255771 round_trippers.go:580]     Audit-Id: 3bd32aac-cb7d-4f23-851f-bcd30de675df
	I1114 14:04:56.933312 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:56.933318 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:56.933325 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:56.933332 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:56.933339 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:56 GMT
	I1114 14:04:56.934000 1255771 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"422"},"items":[{"metadata":{"name":"coredns-5dd5756b68-wxp87","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"51c2bd2a-c15e-4489-ad3b-7ca65e4ec898","resourceVersion":"417","creationTimestamp":"2023-11-14T14:04:22Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"81b2172f-b4bf-4215-976d-efff1994decb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:04:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"81b2172f-b4bf-4215-976d-efff1994decb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55611 chars]
	I1114 14:04:56.936463 1255771 system_pods.go:86] 8 kube-system pods found
	I1114 14:04:56.936492 1255771 system_pods.go:89] "coredns-5dd5756b68-wxp87" [51c2bd2a-c15e-4489-ad3b-7ca65e4ec898] Running
	I1114 14:04:56.936499 1255771 system_pods.go:89] "etcd-multinode-683928" [b8abc8dc-45bf-4827-8e3e-3de67a0f0e45] Running
	I1114 14:04:56.936504 1255771 system_pods.go:89] "kindnet-sgvbn" [7c963530-9d71-4472-afb4-b6a45c1b8186] Running
	I1114 14:04:56.936509 1255771 system_pods.go:89] "kube-apiserver-multinode-683928" [a4d6bf70-13a0-4603-8504-7497b58f5d76] Running
	I1114 14:04:56.936515 1255771 system_pods.go:89] "kube-controller-manager-multinode-683928" [fe4ca2c2-2dba-4c17-ac7b-a62caa16c5cb] Running
	I1114 14:04:56.936521 1255771 system_pods.go:89] "kube-proxy-vcfc4" [679e31a8-7e53-42d9-afd5-5b3b18854981] Running
	I1114 14:04:56.936534 1255771 system_pods.go:89] "kube-scheduler-multinode-683928" [21e5e748-a68f-4769-9422-281cee1db8ac] Running
	I1114 14:04:56.936539 1255771 system_pods.go:89] "storage-provisioner" [5444133d-cc06-4053-afe8-529d67cee17e] Running
	I1114 14:04:56.936575 1255771 system_pods.go:126] duration metric: took 203.547524ms to wait for k8s-apps to be running ...
	I1114 14:04:56.936583 1255771 system_svc.go:44] waiting for kubelet service to be running ....
	I1114 14:04:56.936646 1255771 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 14:04:56.950716 1255771 system_svc.go:56] duration metric: took 14.122259ms WaitForService to wait for kubelet.
	I1114 14:04:56.950747 1255771 kubeadm.go:581] duration metric: took 33.785726521s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
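
The last gate before the kubeadm summary is a kubelet liveness probe run over SSH: systemctl is-active --quiet prints nothing and reports purely through its exit status (0 means active), and the summary map above mirrors the components minikube was asked to verify. A local sketch of the same probe via os/exec (the log's version runs under sudo on the node; extra imports: fmt, os/exec):

	// Exit status 0 from `systemctl is-active --quiet kubelet` means the
	// unit is active; any non-zero status means inactive, failed, or unknown.
	if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
		fmt.Println("kubelet is not active:", err)
	} else {
		fmt.Println("kubelet is active")
	}
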
	I1114 14:04:56.950768 1255771 node_conditions.go:102] verifying NodePressure condition ...
	I1114 14:04:57.129166 1255771 request.go:629] Waited for 178.328208ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I1114 14:04:57.129262 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I1114 14:04:57.129274 1255771 round_trippers.go:469] Request Headers:
	I1114 14:04:57.129284 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:04:57.129291 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:04:57.131778 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:04:57.131800 1255771 round_trippers.go:577] Response Headers:
	I1114 14:04:57.131808 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:04:57 GMT
	I1114 14:04:57.131816 1255771 round_trippers.go:580]     Audit-Id: 523a2537-381a-4dab-aed7-911e1b4ed82d
	I1114 14:04:57.131822 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:04:57.131828 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:04:57.131834 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:04:57.131840 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:04:57.131991 1255771 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"422"},"items":[{"metadata":{"name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","resourceVersion":"401","creationTimestamp":"2023-11-14T14:04:07Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-683928","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T14_04_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 6082 chars]
	I1114 14:04:57.132455 1255771 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1114 14:04:57.132475 1255771 node_conditions.go:123] node cpu capacity is 2
	I1114 14:04:57.132485 1255771 node_conditions.go:105] duration metric: took 181.712382ms to run NodePressure ...
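
The NodePressure step reads its figures straight off the Node object: the two values logged above (203034800Ki of ephemeral storage, 2 CPUs) live in node.Status.Capacity. A sketch, reusing cs and ctx from the earlier sketches:

	// Fetch the node and print the capacity quantities the log reports.
	node, err := cs.CoreV1().Nodes().Get(ctx, "multinode-683928", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	cpu := node.Status.Capacity[corev1.ResourceCPU]
	storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
	fmt.Printf("cpu=%s ephemeral-storage=%s\n", cpu.String(), storage.String())
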
	I1114 14:04:57.132496 1255771 start.go:228] waiting for startup goroutines ...
	I1114 14:04:57.132503 1255771 start.go:233] waiting for cluster config update ...
	I1114 14:04:57.132513 1255771 start.go:242] writing updated cluster config ...
	I1114 14:04:57.134998 1255771 out.go:177] 
	I1114 14:04:57.136950 1255771 config.go:182] Loaded profile config "multinode-683928": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 14:04:57.137048 1255771 profile.go:148] Saving config to /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/multinode-683928/config.json ...
	I1114 14:04:57.139043 1255771 out.go:177] * Starting worker node multinode-683928-m02 in cluster multinode-683928
	I1114 14:04:57.140712 1255771 cache.go:121] Beginning downloading kic base image for docker with crio
	I1114 14:04:57.142641 1255771 out.go:177] * Pulling base image ...
	I1114 14:04:57.144699 1255771 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1114 14:04:57.144739 1255771 cache.go:56] Caching tarball of preloaded images
	I1114 14:04:57.144780 1255771 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 in local docker daemon
	I1114 14:04:57.144853 1255771 preload.go:174] Found /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1114 14:04:57.144870 1255771 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1114 14:04:57.144975 1255771 profile.go:148] Saving config to /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/multinode-683928/config.json ...
	I1114 14:04:57.162642 1255771 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 in local docker daemon, skipping pull
	I1114 14:04:57.162668 1255771 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 exists in daemon, skipping load
	I1114 14:04:57.162687 1255771 cache.go:194] Successfully downloaded all kic artifacts
	I1114 14:04:57.162728 1255771 start.go:365] acquiring machines lock for multinode-683928-m02: {Name:mkc3ea8e3323f207350ef65d18d21c5940181fd0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 14:04:57.162842 1255771 start.go:369] acquired machines lock for "multinode-683928-m02" in 96.861µs
	I1114 14:04:57.162868 1255771 start.go:93] Provisioning new machine with config: &{Name:multinode-683928 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-683928 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1114 14:04:57.162944 1255771 start.go:125] createHost starting for "m02" (driver="docker")
	I1114 14:04:57.165190 1255771 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1114 14:04:57.165322 1255771 start.go:159] libmachine.API.Create for "multinode-683928" (driver="docker")
	I1114 14:04:57.165349 1255771 client.go:168] LocalClient.Create starting
	I1114 14:04:57.165429 1255771 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/ca.pem
	I1114 14:04:57.165460 1255771 main.go:141] libmachine: Decoding PEM data...
	I1114 14:04:57.165475 1255771 main.go:141] libmachine: Parsing certificate...
	I1114 14:04:57.165529 1255771 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/cert.pem
	I1114 14:04:57.165545 1255771 main.go:141] libmachine: Decoding PEM data...
	I1114 14:04:57.165556 1255771 main.go:141] libmachine: Parsing certificate...
	I1114 14:04:57.165797 1255771 cli_runner.go:164] Run: docker network inspect multinode-683928 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1114 14:04:57.183791 1255771 network_create.go:77] Found existing network {name:multinode-683928 subnet:0x40033bbc50 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I1114 14:04:57.183839 1255771 kic.go:121] calculated static IP "192.168.58.3" for the "multinode-683928-m02" container
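The static IP above falls out of the existing network's IPAM config. A minimal sketch of the same probe (network name taken from the log; this is not minikube's code, just the equivalent docker invocation):

	NET=multinode-683928
	# Print the subnet and gateway that network_create.go reports above.
	docker network inspect "$NET" \
	  --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'
	# Here: 192.168.58.0/24 and 192.168.58.1; with .2 held by the control
	# plane, the next worker gets 192.168.58.3.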
	I1114 14:04:57.183917 1255771 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1114 14:04:57.209727 1255771 cli_runner.go:164] Run: docker volume create multinode-683928-m02 --label name.minikube.sigs.k8s.io=multinode-683928-m02 --label created_by.minikube.sigs.k8s.io=true
	I1114 14:04:57.229219 1255771 oci.go:103] Successfully created a docker volume multinode-683928-m02
	I1114 14:04:57.229319 1255771 cli_runner.go:164] Run: docker run --rm --name multinode-683928-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-683928-m02 --entrypoint /usr/bin/test -v multinode-683928-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 -d /var/lib
	I1114 14:04:57.813250 1255771 oci.go:107] Successfully prepared a docker volume multinode-683928-m02
	I1114 14:04:57.813295 1255771 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1114 14:04:57.813315 1255771 kic.go:194] Starting extracting preloaded images to volume ...
	I1114 14:04:57.813417 1255771 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-683928-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 -I lz4 -xf /preloaded.tar -C /extractDir
	I1114 14:05:02.238688 1255771 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v multinode-683928-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 -I lz4 -xf /preloaded.tar -C /extractDir: (4.42522565s)
	I1114 14:05:02.238717 1255771 kic.go:203] duration metric: took 4.425399 seconds to extract preloaded images to volume
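The preload step is a one-shot tar container writing into the node's named volume. The same command from the log, made copy-pasteable (cache path shortened to the current directory; image digest dropped for brevity):

	TARBALL=preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4  # assumed to be in $PWD
	docker run --rm --entrypoint /usr/bin/tar \
	  -v "$PWD/$TARBALL":/preloaded.tar:ro \
	  -v multinode-683928-m02:/extractDir \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565 \
	  -I lz4 -xf /preloaded.tar -C /extractDir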
	W1114 14:05:02.238863 1255771 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1114 14:05:02.238985 1255771 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1114 14:05:02.308754 1255771 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-683928-m02 --name multinode-683928-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-683928-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-683928-m02 --network multinode-683928 --ip 192.168.58.3 --volume multinode-683928-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24
	I1114 14:05:02.681999 1255771 cli_runner.go:164] Run: docker container inspect multinode-683928-m02 --format={{.State.Running}}
	I1114 14:05:02.711055 1255771 cli_runner.go:164] Run: docker container inspect multinode-683928-m02 --format={{.State.Status}}
	I1114 14:05:02.743011 1255771 cli_runner.go:164] Run: docker exec multinode-683928-m02 stat /var/lib/dpkg/alternatives/iptables
	I1114 14:05:02.824052 1255771 oci.go:144] the created container "multinode-683928-m02" has a running status.
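The two inspects and the exec above act as a liveness gate before provisioning starts; condensed into a sketch (container name from the log):

	C=multinode-683928-m02
	# Fail fast unless the container reports Running.
	test "$(docker container inspect -f '{{.State.Running}}' "$C")" = true
	# Confirm the iptables alternatives wiring exists inside the node.
	docker exec "$C" stat /var/lib/dpkg/alternatives/iptables >/dev/null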
	I1114 14:05:02.824084 1255771 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17581-1186318/.minikube/machines/multinode-683928-m02/id_rsa...
	I1114 14:05:04.161107 1255771 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1186318/.minikube/machines/multinode-683928-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1114 14:05:04.161160 1255771 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17581-1186318/.minikube/machines/multinode-683928-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1114 14:05:04.182976 1255771 cli_runner.go:164] Run: docker container inspect multinode-683928-m02 --format={{.State.Status}}
	I1114 14:05:04.201149 1255771 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1114 14:05:04.201173 1255771 kic_runner.go:114] Args: [docker exec --privileged multinode-683928-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
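Key generation plus the authorized_keys install reduce to the following sketch (minikube copies the key through its own kic_runner; docker cp is used here as a stand-in, and the image is assumed to ship /home/docker/.ssh):

	C=multinode-683928-m02
	ssh-keygen -t rsa -N '' -f ./id_rsa   # minikube keeps this under .minikube/machines/<node>/
	docker cp ./id_rsa.pub "$C":/home/docker/.ssh/authorized_keys
	docker exec --privileged "$C" chown docker:docker /home/docker/.ssh/authorized_keys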
	I1114 14:05:04.263420 1255771 cli_runner.go:164] Run: docker container inspect multinode-683928-m02 --format={{.State.Status}}
	I1114 14:05:04.282563 1255771 machine.go:88] provisioning docker machine ...
	I1114 14:05:04.282593 1255771 ubuntu.go:169] provisioning hostname "multinode-683928-m02"
	I1114 14:05:04.282662 1255771 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-683928-m02
	I1114 14:05:04.301801 1255771 main.go:141] libmachine: Using SSH client type: native
	I1114 14:05:04.302226 1255771 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bded0] 0x3c0640 <nil>  [] 0s} 127.0.0.1 34359 <nil> <nil>}
	I1114 14:05:04.302244 1255771 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-683928-m02 && echo "multinode-683928-m02" | sudo tee /etc/hostname
	I1114 14:05:04.460155 1255771 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-683928-m02
	
	I1114 14:05:04.460238 1255771 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-683928-m02
	I1114 14:05:04.480067 1255771 main.go:141] libmachine: Using SSH client type: native
	I1114 14:05:04.480480 1255771 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bded0] 0x3c0640 <nil>  [] 0s} 127.0.0.1 34359 <nil> <nil>}
	I1114 14:05:04.480504 1255771 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-683928-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-683928-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-683928-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1114 14:05:04.625942 1255771 main.go:141] libmachine: SSH cmd err, output: <nil>: 
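The two SSH commands above, as they would run inside the node:

	H=multinode-683928-m02
	sudo hostname "$H" && echo "$H" | sudo tee /etc/hostname
	# Keep 127.0.1.1 pointing at the new hostname, adding the line if absent.
	if ! grep -q "\s${H}$" /etc/hosts; then
	  if grep -q '^127.0.1.1\s' /etc/hosts; then
	    sudo sed -i "s/^127.0.1.1\s.*/127.0.1.1 ${H}/" /etc/hosts
	  else
	    echo "127.0.1.1 ${H}" | sudo tee -a /etc/hosts
	  fi
	fi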
	I1114 14:05:04.625973 1255771 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17581-1186318/.minikube CaCertPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17581-1186318/.minikube}
	I1114 14:05:04.625990 1255771 ubuntu.go:177] setting up certificates
	I1114 14:05:04.625999 1255771 provision.go:83] configureAuth start
	I1114 14:05:04.626072 1255771 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-683928-m02
	I1114 14:05:04.646737 1255771 provision.go:138] copyHostCerts
	I1114 14:05:04.646783 1255771 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17581-1186318/.minikube/ca.pem
	I1114 14:05:04.646816 1255771 exec_runner.go:144] found /home/jenkins/minikube-integration/17581-1186318/.minikube/ca.pem, removing ...
	I1114 14:05:04.646827 1255771 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17581-1186318/.minikube/ca.pem
	I1114 14:05:04.646909 1255771 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17581-1186318/.minikube/ca.pem (1082 bytes)
	I1114 14:05:04.646997 1255771 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17581-1186318/.minikube/cert.pem
	I1114 14:05:04.647019 1255771 exec_runner.go:144] found /home/jenkins/minikube-integration/17581-1186318/.minikube/cert.pem, removing ...
	I1114 14:05:04.647024 1255771 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17581-1186318/.minikube/cert.pem
	I1114 14:05:04.647057 1255771 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17581-1186318/.minikube/cert.pem (1123 bytes)
	I1114 14:05:04.647104 1255771 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17581-1186318/.minikube/key.pem
	I1114 14:05:04.647122 1255771 exec_runner.go:144] found /home/jenkins/minikube-integration/17581-1186318/.minikube/key.pem, removing ...
	I1114 14:05:04.647130 1255771 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17581-1186318/.minikube/key.pem
	I1114 14:05:04.647155 1255771 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17581-1186318/.minikube/key.pem (1675 bytes)
	I1114 14:05:04.647204 1255771 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17581-1186318/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17581-1186318/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17581-1186318/.minikube/certs/ca-key.pem org=jenkins.multinode-683928-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-683928-m02]
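minikube signs this server certificate in Go; an equivalent openssl sketch for the same org and SAN list (ca.pem/ca-key.pem assumed present), shown only to make the inputs concrete:

	openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
	  -subj "/O=jenkins.multinode-683928-m02" -out server.csr
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -days 365 -out server.pem \
	  -extfile <(echo "subjectAltName=IP:192.168.58.3,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:multinode-683928-m02")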
	I1114 14:05:05.422064 1255771 provision.go:172] copyRemoteCerts
	I1114 14:05:05.422138 1255771 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1114 14:05:05.422180 1255771 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-683928-m02
	I1114 14:05:05.441805 1255771 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34359 SSHKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/multinode-683928-m02/id_rsa Username:docker}
	I1114 14:05:05.547606 1255771 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1114 14:05:05.547675 1255771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1114 14:05:05.577965 1255771 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1186318/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1114 14:05:05.578094 1255771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1114 14:05:05.607954 1255771 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1186318/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1114 14:05:05.608021 1255771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1114 14:05:05.637632 1255771 provision.go:86] duration metric: configureAuth took 1.011619549s
	I1114 14:05:05.637703 1255771 ubuntu.go:193] setting minikube options for container-runtime
	I1114 14:05:05.637924 1255771 config.go:182] Loaded profile config "multinode-683928": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 14:05:05.638061 1255771 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-683928-m02
	I1114 14:05:05.656436 1255771 main.go:141] libmachine: Using SSH client type: native
	I1114 14:05:05.656948 1255771 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bded0] 0x3c0640 <nil>  [] 0s} 127.0.0.1 34359 <nil> <nil>}
	I1114 14:05:05.656970 1255771 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1114 14:05:05.925481 1255771 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1114 14:05:05.925544 1255771 machine.go:91] provisioned docker machine in 1.642960657s
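The container-runtime option step is one drop-in file plus a restart; inside the node it amounts to:

	sudo mkdir -p /etc/sysconfig
	printf "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" \
	  | sudo tee /etc/sysconfig/crio.minikube
	sudo systemctl restart crio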
	I1114 14:05:05.925571 1255771 client.go:171] LocalClient.Create took 8.760211698s
	I1114 14:05:05.925620 1255771 start.go:167] duration metric: libmachine.API.Create for "multinode-683928" took 8.760287898s
	I1114 14:05:05.925649 1255771 start.go:300] post-start starting for "multinode-683928-m02" (driver="docker")
	I1114 14:05:05.925675 1255771 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1114 14:05:05.925772 1255771 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1114 14:05:05.925846 1255771 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-683928-m02
	I1114 14:05:05.946762 1255771 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34359 SSHKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/multinode-683928-m02/id_rsa Username:docker}
	I1114 14:05:06.053279 1255771 ssh_runner.go:195] Run: cat /etc/os-release
	I1114 14:05:06.058388 1255771 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I1114 14:05:06.058418 1255771 command_runner.go:130] > NAME="Ubuntu"
	I1114 14:05:06.058427 1255771 command_runner.go:130] > VERSION_ID="22.04"
	I1114 14:05:06.058434 1255771 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I1114 14:05:06.058440 1255771 command_runner.go:130] > VERSION_CODENAME=jammy
	I1114 14:05:06.058445 1255771 command_runner.go:130] > ID=ubuntu
	I1114 14:05:06.058450 1255771 command_runner.go:130] > ID_LIKE=debian
	I1114 14:05:06.058457 1255771 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1114 14:05:06.058469 1255771 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1114 14:05:06.058477 1255771 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1114 14:05:06.058491 1255771 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1114 14:05:06.058497 1255771 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I1114 14:05:06.058546 1255771 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1114 14:05:06.058578 1255771 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1114 14:05:06.058594 1255771 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1114 14:05:06.058603 1255771 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1114 14:05:06.058621 1255771 filesync.go:126] Scanning /home/jenkins/minikube-integration/17581-1186318/.minikube/addons for local assets ...
	I1114 14:05:06.058694 1255771 filesync.go:126] Scanning /home/jenkins/minikube-integration/17581-1186318/.minikube/files for local assets ...
	I1114 14:05:06.058785 1255771 filesync.go:149] local asset: /home/jenkins/minikube-integration/17581-1186318/.minikube/files/etc/ssl/certs/11916902.pem -> 11916902.pem in /etc/ssl/certs
	I1114 14:05:06.058797 1255771 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1186318/.minikube/files/etc/ssl/certs/11916902.pem -> /etc/ssl/certs/11916902.pem
	I1114 14:05:06.058902 1255771 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1114 14:05:06.071413 1255771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/files/etc/ssl/certs/11916902.pem --> /etc/ssl/certs/11916902.pem (1708 bytes)
	I1114 14:05:06.105999 1255771 start.go:303] post-start completed in 180.322492ms
	I1114 14:05:06.106398 1255771 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-683928-m02
	I1114 14:05:06.125669 1255771 profile.go:148] Saving config to /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/multinode-683928/config.json ...
	I1114 14:05:06.125975 1255771 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1114 14:05:06.126033 1255771 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-683928-m02
	I1114 14:05:06.145778 1255771 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34359 SSHKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/multinode-683928-m02/id_rsa Username:docker}
	I1114 14:05:06.242847 1255771 command_runner.go:130] > 12%
	I1114 14:05:06.242928 1255771 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1114 14:05:06.248668 1255771 command_runner.go:130] > 172G
	I1114 14:05:06.249213 1255771 start.go:128] duration metric: createHost completed in 9.086254522s
	I1114 14:05:06.249233 1255771 start.go:83] releasing machines lock for "multinode-683928-m02", held for 9.086382923s
	I1114 14:05:06.249307 1255771 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-683928-m02
	I1114 14:05:06.278543 1255771 out.go:177] * Found network options:
	I1114 14:05:06.280722 1255771 out.go:177]   - NO_PROXY=192.168.58.2
	W1114 14:05:06.283180 1255771 proxy.go:119] fail to check proxy env: Error ip not in block
	W1114 14:05:06.283225 1255771 proxy.go:119] fail to check proxy env: Error ip not in block
	I1114 14:05:06.283297 1255771 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1114 14:05:06.283342 1255771 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-683928-m02
	I1114 14:05:06.283375 1255771 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1114 14:05:06.283433 1255771 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-683928-m02
	I1114 14:05:06.307450 1255771 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34359 SSHKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/multinode-683928-m02/id_rsa Username:docker}
	I1114 14:05:06.319937 1255771 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34359 SSHKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/multinode-683928-m02/id_rsa Username:docker}
	I1114 14:05:06.548129 1255771 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1114 14:05:06.589704 1255771 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1114 14:05:06.595714 1255771 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I1114 14:05:06.595741 1255771 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I1114 14:05:06.595756 1255771 command_runner.go:130] > Device: b3h/179d	Inode: 1571320     Links: 1
	I1114 14:05:06.595764 1255771 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1114 14:05:06.595771 1255771 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I1114 14:05:06.595778 1255771 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I1114 14:05:06.595789 1255771 command_runner.go:130] > Change: 2023-11-14 13:34:26.425737793 +0000
	I1114 14:05:06.595796 1255771 command_runner.go:130] >  Birth: 2023-11-14 13:34:26.425737793 +0000
	I1114 14:05:06.596026 1255771 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1114 14:05:06.621106 1255771 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1114 14:05:06.621275 1255771 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1114 14:05:06.665038 1255771 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I1114 14:05:06.665085 1255771 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
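Both find passes above park conflicting CNI configs under a .mk_disabled suffix instead of deleting them. The bridge/podman pass, restated with a safer -exec form ($1 instead of raw {} inside sh -c):

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;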
	I1114 14:05:06.665094 1255771 start.go:472] detecting cgroup driver to use...
	I1114 14:05:06.665127 1255771 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1114 14:05:06.665185 1255771 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1114 14:05:06.686120 1255771 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1114 14:05:06.700632 1255771 docker.go:203] disabling cri-docker service (if available) ...
	I1114 14:05:06.700701 1255771 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1114 14:05:06.718131 1255771 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1114 14:05:06.736000 1255771 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1114 14:05:06.831331 1255771 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1114 14:05:06.940604 1255771 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1114 14:05:06.940721 1255771 docker.go:219] disabling docker service ...
	I1114 14:05:06.940812 1255771 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1114 14:05:06.963881 1255771 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1114 14:05:06.980989 1255771 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1114 14:05:07.075606 1255771 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1114 14:05:07.075698 1255771 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1114 14:05:07.190855 1255771 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1114 14:05:07.190929 1255771 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
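The runtime handoff is the same four unit operations per daemon: stop the socket, stop the service, disable the socket, mask the service. For docker, as logged:

	sudo systemctl stop -f docker.socket docker.service
	sudo systemctl disable docker.socket
	sudo systemctl mask docker.service   # symlinks the unit to /dev/null, as the log shows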
	I1114 14:05:07.205880 1255771 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1114 14:05:07.225601 1255771 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1114 14:05:07.226986 1255771 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1114 14:05:07.227090 1255771 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 14:05:07.239353 1255771 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1114 14:05:07.239475 1255771 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 14:05:07.253440 1255771 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 14:05:07.265631 1255771 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 14:05:07.278179 1255771 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1114 14:05:07.289429 1255771 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1114 14:05:07.300309 1255771 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1114 14:05:07.300468 1255771 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1114 14:05:07.310996 1255771 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1114 14:05:07.413160 1255771 ssh_runner.go:195] Run: sudo systemctl restart crio
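Collected, the CRI-O tuning in this block is four sed passes on a single drop-in plus the forwarding sysctl (file path verbatim from the log):

	CONF=/etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$CONF"
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
	sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	sudo systemctl daemon-reload && sudo systemctl restart crio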
	I1114 14:05:07.546397 1255771 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1114 14:05:07.546468 1255771 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1114 14:05:07.551107 1255771 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1114 14:05:07.551169 1255771 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1114 14:05:07.551192 1255771 command_runner.go:130] > Device: bch/188d	Inode: 190         Links: 1
	I1114 14:05:07.551223 1255771 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1114 14:05:07.551248 1255771 command_runner.go:130] > Access: 2023-11-14 14:05:07.530059237 +0000
	I1114 14:05:07.551271 1255771 command_runner.go:130] > Modify: 2023-11-14 14:05:07.530059237 +0000
	I1114 14:05:07.551293 1255771 command_runner.go:130] > Change: 2023-11-14 14:05:07.530059237 +0000
	I1114 14:05:07.551313 1255771 command_runner.go:130] >  Birth: -
	I1114 14:05:07.551778 1255771 start.go:540] Will wait 60s for crictl version
	I1114 14:05:07.551838 1255771 ssh_runner.go:195] Run: which crictl
	I1114 14:05:07.556114 1255771 command_runner.go:130] > /usr/bin/crictl
	I1114 14:05:07.556490 1255771 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1114 14:05:07.598532 1255771 command_runner.go:130] > Version:  0.1.0
	I1114 14:05:07.598582 1255771 command_runner.go:130] > RuntimeName:  cri-o
	I1114 14:05:07.598823 1255771 command_runner.go:130] > RuntimeVersion:  1.24.6
	I1114 14:05:07.598905 1255771 command_runner.go:130] > RuntimeApiVersion:  v1
	I1114 14:05:07.601801 1255771 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1114 14:05:07.601892 1255771 ssh_runner.go:195] Run: crio --version
	I1114 14:05:07.646868 1255771 command_runner.go:130] > crio version 1.24.6
	I1114 14:05:07.646937 1255771 command_runner.go:130] > Version:          1.24.6
	I1114 14:05:07.646961 1255771 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1114 14:05:07.646984 1255771 command_runner.go:130] > GitTreeState:     clean
	I1114 14:05:07.647021 1255771 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1114 14:05:07.647047 1255771 command_runner.go:130] > GoVersion:        go1.18.2
	I1114 14:05:07.647068 1255771 command_runner.go:130] > Compiler:         gc
	I1114 14:05:07.647105 1255771 command_runner.go:130] > Platform:         linux/arm64
	I1114 14:05:07.647126 1255771 command_runner.go:130] > Linkmode:         dynamic
	I1114 14:05:07.647150 1255771 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1114 14:05:07.647179 1255771 command_runner.go:130] > SeccompEnabled:   true
	I1114 14:05:07.647205 1255771 command_runner.go:130] > AppArmorEnabled:  false
	I1114 14:05:07.649043 1255771 ssh_runner.go:195] Run: crio --version
	I1114 14:05:07.698346 1255771 command_runner.go:130] > crio version 1.24.6
	I1114 14:05:07.698370 1255771 command_runner.go:130] > Version:          1.24.6
	I1114 14:05:07.698390 1255771 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I1114 14:05:07.698397 1255771 command_runner.go:130] > GitTreeState:     clean
	I1114 14:05:07.698405 1255771 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I1114 14:05:07.698411 1255771 command_runner.go:130] > GoVersion:        go1.18.2
	I1114 14:05:07.698421 1255771 command_runner.go:130] > Compiler:         gc
	I1114 14:05:07.698427 1255771 command_runner.go:130] > Platform:         linux/arm64
	I1114 14:05:07.698438 1255771 command_runner.go:130] > Linkmode:         dynamic
	I1114 14:05:07.698448 1255771 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1114 14:05:07.698464 1255771 command_runner.go:130] > SeccompEnabled:   true
	I1114 14:05:07.698473 1255771 command_runner.go:130] > AppArmorEnabled:  false
	I1114 14:05:07.702356 1255771 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.6 ...
	I1114 14:05:07.704199 1255771 out.go:177]   - env NO_PROXY=192.168.58.2
	I1114 14:05:07.706017 1255771 cli_runner.go:164] Run: docker network inspect multinode-683928 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1114 14:05:07.724249 1255771 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I1114 14:05:07.729122 1255771 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
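The /etc/hosts rewrite above is dedupe-then-append through a temp file, so the final cp replaces the file in one step:

	{ grep -v $'\thost.minikube.internal$' /etc/hosts; \
	  printf '192.168.58.1\thost.minikube.internal\n'; } > /tmp/h.$$ \
	  && sudo cp /tmp/h.$$ /etc/hosts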
	I1114 14:05:07.743049 1255771 certs.go:56] Setting up /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/multinode-683928 for IP: 192.168.58.3
	I1114 14:05:07.743084 1255771 certs.go:190] acquiring lock for shared ca certs: {Name:mk1fdfc415c611904fd8e5ce757e79f4579c67a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1114 14:05:07.743220 1255771 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17581-1186318/.minikube/ca.key
	I1114 14:05:07.743262 1255771 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17581-1186318/.minikube/proxy-client-ca.key
	I1114 14:05:07.743279 1255771 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1186318/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1114 14:05:07.743293 1255771 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1186318/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1114 14:05:07.743307 1255771 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1186318/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1114 14:05:07.743318 1255771 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1186318/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1114 14:05:07.743375 1255771 certs.go:437] found cert: /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/home/jenkins/minikube-integration/17581-1186318/.minikube/certs/1191690.pem (1338 bytes)
	W1114 14:05:07.743410 1255771 certs.go:433] ignoring /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/home/jenkins/minikube-integration/17581-1186318/.minikube/certs/1191690_empty.pem, impossibly tiny 0 bytes
	I1114 14:05:07.743423 1255771 certs.go:437] found cert: /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/home/jenkins/minikube-integration/17581-1186318/.minikube/certs/ca-key.pem (1675 bytes)
	I1114 14:05:07.743461 1255771 certs.go:437] found cert: /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/home/jenkins/minikube-integration/17581-1186318/.minikube/certs/ca.pem (1082 bytes)
	I1114 14:05:07.743493 1255771 certs.go:437] found cert: /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/home/jenkins/minikube-integration/17581-1186318/.minikube/certs/cert.pem (1123 bytes)
	I1114 14:05:07.743522 1255771 certs.go:437] found cert: /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/home/jenkins/minikube-integration/17581-1186318/.minikube/certs/key.pem (1675 bytes)
	I1114 14:05:07.743577 1255771 certs.go:437] found cert: /home/jenkins/minikube-integration/17581-1186318/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17581-1186318/.minikube/files/etc/ssl/certs/11916902.pem (1708 bytes)
	I1114 14:05:07.743609 1255771 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/1191690.pem -> /usr/share/ca-certificates/1191690.pem
	I1114 14:05:07.743636 1255771 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1186318/.minikube/files/etc/ssl/certs/11916902.pem -> /usr/share/ca-certificates/11916902.pem
	I1114 14:05:07.743651 1255771 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17581-1186318/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1114 14:05:07.744039 1255771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1114 14:05:07.773717 1255771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1114 14:05:07.803608 1255771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1114 14:05:07.834752 1255771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1114 14:05:07.864629 1255771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/1191690.pem --> /usr/share/ca-certificates/1191690.pem (1338 bytes)
	I1114 14:05:07.902114 1255771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/files/etc/ssl/certs/11916902.pem --> /usr/share/ca-certificates/11916902.pem (1708 bytes)
	I1114 14:05:07.933610 1255771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1114 14:05:07.963863 1255771 ssh_runner.go:195] Run: openssl version
	I1114 14:05:07.971069 1255771 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I1114 14:05:07.971503 1255771 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11916902.pem && ln -fs /usr/share/ca-certificates/11916902.pem /etc/ssl/certs/11916902.pem"
	I1114 14:05:07.984066 1255771 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11916902.pem
	I1114 14:05:07.989025 1255771 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Nov 14 13:42 /usr/share/ca-certificates/11916902.pem
	I1114 14:05:07.989377 1255771 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 14 13:42 /usr/share/ca-certificates/11916902.pem
	I1114 14:05:07.989467 1255771 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11916902.pem
	I1114 14:05:07.997997 1255771 command_runner.go:130] > 3ec20f2e
	I1114 14:05:07.998444 1255771 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11916902.pem /etc/ssl/certs/3ec20f2e.0"
	I1114 14:05:08.012622 1255771 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1114 14:05:08.025992 1255771 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1114 14:05:08.031120 1255771 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Nov 14 13:34 /usr/share/ca-certificates/minikubeCA.pem
	I1114 14:05:08.031162 1255771 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 14 13:34 /usr/share/ca-certificates/minikubeCA.pem
	I1114 14:05:08.031298 1255771 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1114 14:05:08.040736 1255771 command_runner.go:130] > b5213941
	I1114 14:05:08.040842 1255771 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1114 14:05:08.053910 1255771 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1191690.pem && ln -fs /usr/share/ca-certificates/1191690.pem /etc/ssl/certs/1191690.pem"
	I1114 14:05:08.066688 1255771 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1191690.pem
	I1114 14:05:08.071545 1255771 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Nov 14 13:42 /usr/share/ca-certificates/1191690.pem
	I1114 14:05:08.071848 1255771 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 14 13:42 /usr/share/ca-certificates/1191690.pem
	I1114 14:05:08.071932 1255771 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1191690.pem
	I1114 14:05:08.080718 1255771 command_runner.go:130] > 51391683
	I1114 14:05:08.081329 1255771 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1191690.pem /etc/ssl/certs/51391683.0"
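Each trust-store pass above follows one recipe: copy the PEM into /usr/share/ca-certificates, then link it by OpenSSL subject hash so library lookups find it:

	PEM=/usr/share/ca-certificates/11916902.pem
	HASH=$(openssl x509 -hash -noout -in "$PEM")   # 3ec20f2e in the log
	sudo ln -fs "$PEM" "/etc/ssl/certs/${HASH}.0"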
	I1114 14:05:08.095091 1255771 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1114 14:05:08.099948 1255771 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1114 14:05:08.099984 1255771 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1114 14:05:08.100121 1255771 ssh_runner.go:195] Run: crio config
	I1114 14:05:08.154779 1255771 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1114 14:05:08.154804 1255771 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1114 14:05:08.154813 1255771 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1114 14:05:08.154817 1255771 command_runner.go:130] > #
	I1114 14:05:08.154828 1255771 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1114 14:05:08.154836 1255771 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1114 14:05:08.154844 1255771 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1114 14:05:08.154857 1255771 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1114 14:05:08.154866 1255771 command_runner.go:130] > # reload'.
	I1114 14:05:08.154874 1255771 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1114 14:05:08.154883 1255771 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1114 14:05:08.154901 1255771 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1114 14:05:08.154913 1255771 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1114 14:05:08.154917 1255771 command_runner.go:130] > [crio]
	I1114 14:05:08.154925 1255771 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1114 14:05:08.154934 1255771 command_runner.go:130] > # containers images, in this directory.
	I1114 14:05:08.154942 1255771 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1114 14:05:08.154953 1255771 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1114 14:05:08.154959 1255771 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I1114 14:05:08.154967 1255771 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1114 14:05:08.154977 1255771 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1114 14:05:08.154983 1255771 command_runner.go:130] > # storage_driver = "vfs"
	I1114 14:05:08.154992 1255771 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1114 14:05:08.155002 1255771 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1114 14:05:08.155007 1255771 command_runner.go:130] > # storage_option = [
	I1114 14:05:08.155011 1255771 command_runner.go:130] > # ]
	I1114 14:05:08.155021 1255771 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1114 14:05:08.155029 1255771 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1114 14:05:08.155043 1255771 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1114 14:05:08.155050 1255771 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1114 14:05:08.155060 1255771 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1114 14:05:08.155065 1255771 command_runner.go:130] > # always happen on a node reboot
	I1114 14:05:08.155076 1255771 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1114 14:05:08.155083 1255771 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1114 14:05:08.155090 1255771 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1114 14:05:08.155103 1255771 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1114 14:05:08.155110 1255771 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1114 14:05:08.155122 1255771 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1114 14:05:08.155132 1255771 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1114 14:05:08.155139 1255771 command_runner.go:130] > # internal_wipe = true
	I1114 14:05:08.155146 1255771 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1114 14:05:08.155156 1255771 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1114 14:05:08.155163 1255771 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1114 14:05:08.155172 1255771 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1114 14:05:08.155179 1255771 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1114 14:05:08.155184 1255771 command_runner.go:130] > [crio.api]
	I1114 14:05:08.155198 1255771 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1114 14:05:08.155205 1255771 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1114 14:05:08.155214 1255771 command_runner.go:130] > # IP address on which the stream server will listen.
	I1114 14:05:08.155219 1255771 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1114 14:05:08.155227 1255771 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1114 14:05:08.155236 1255771 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1114 14:05:08.155241 1255771 command_runner.go:130] > # stream_port = "0"
	I1114 14:05:08.155248 1255771 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1114 14:05:08.155255 1255771 command_runner.go:130] > # stream_enable_tls = false
	I1114 14:05:08.155263 1255771 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1114 14:05:08.155270 1255771 command_runner.go:130] > # stream_idle_timeout = ""
	I1114 14:05:08.155278 1255771 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1114 14:05:08.155286 1255771 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1114 14:05:08.155294 1255771 command_runner.go:130] > # minutes.
	I1114 14:05:08.155299 1255771 command_runner.go:130] > # stream_tls_cert = ""
	I1114 14:05:08.155307 1255771 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1114 14:05:08.155318 1255771 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1114 14:05:08.155325 1255771 command_runner.go:130] > # stream_tls_key = ""
	I1114 14:05:08.155334 1255771 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1114 14:05:08.155345 1255771 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1114 14:05:08.155351 1255771 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1114 14:05:08.155357 1255771 command_runner.go:130] > # stream_tls_ca = ""
	I1114 14:05:08.155369 1255771 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1114 14:05:08.155375 1255771 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1114 14:05:08.155387 1255771 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1114 14:05:08.155393 1255771 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1114 14:05:08.155425 1255771 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1114 14:05:08.155439 1255771 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1114 14:05:08.155445 1255771 command_runner.go:130] > [crio.runtime]
	I1114 14:05:08.155452 1255771 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1114 14:05:08.155462 1255771 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1114 14:05:08.155467 1255771 command_runner.go:130] > # "nofile=1024:2048"
	I1114 14:05:08.155477 1255771 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1114 14:05:08.155482 1255771 command_runner.go:130] > # default_ulimits = [
	I1114 14:05:08.155487 1255771 command_runner.go:130] > # ]
	I1114 14:05:08.155494 1255771 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1114 14:05:08.155505 1255771 command_runner.go:130] > # no_pivot = false
	I1114 14:05:08.155512 1255771 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1114 14:05:08.155520 1255771 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1114 14:05:08.155529 1255771 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1114 14:05:08.155539 1255771 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1114 14:05:08.155547 1255771 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1114 14:05:08.155555 1255771 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1114 14:05:08.155564 1255771 command_runner.go:130] > # conmon = ""
	I1114 14:05:08.155569 1255771 command_runner.go:130] > # Cgroup setting for conmon
	I1114 14:05:08.155578 1255771 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1114 14:05:08.155586 1255771 command_runner.go:130] > conmon_cgroup = "pod"
	I1114 14:05:08.155596 1255771 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1114 14:05:08.155606 1255771 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1114 14:05:08.155614 1255771 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1114 14:05:08.155635 1255771 command_runner.go:130] > # conmon_env = [
	I1114 14:05:08.155640 1255771 command_runner.go:130] > # ]
	I1114 14:05:08.155646 1255771 command_runner.go:130] > # Additional environment variables to set for all the
	I1114 14:05:08.155655 1255771 command_runner.go:130] > # containers. These are overridden if set in the
	I1114 14:05:08.155663 1255771 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1114 14:05:08.155668 1255771 command_runner.go:130] > # default_env = [
	I1114 14:05:08.155675 1255771 command_runner.go:130] > # ]
	I1114 14:05:08.155682 1255771 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1114 14:05:08.155696 1255771 command_runner.go:130] > # selinux = false
	I1114 14:05:08.155704 1255771 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1114 14:05:08.155712 1255771 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1114 14:05:08.155722 1255771 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1114 14:05:08.156840 1255771 command_runner.go:130] > # seccomp_profile = ""
	I1114 14:05:08.156905 1255771 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1114 14:05:08.156928 1255771 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1114 14:05:08.156952 1255771 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1114 14:05:08.156988 1255771 command_runner.go:130] > # which might increase security.
	I1114 14:05:08.157016 1255771 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I1114 14:05:08.157040 1255771 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1114 14:05:08.157077 1255771 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1114 14:05:08.157104 1255771 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1114 14:05:08.157127 1255771 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1114 14:05:08.157161 1255771 command_runner.go:130] > # This option supports live configuration reload.
	I1114 14:05:08.157185 1255771 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1114 14:05:08.157207 1255771 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1114 14:05:08.157243 1255771 command_runner.go:130] > # the cgroup blockio controller.
	I1114 14:05:08.157266 1255771 command_runner.go:130] > # blockio_config_file = ""
	I1114 14:05:08.157288 1255771 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1114 14:05:08.157321 1255771 command_runner.go:130] > # irqbalance daemon.
	I1114 14:05:08.157349 1255771 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1114 14:05:08.157372 1255771 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1114 14:05:08.157409 1255771 command_runner.go:130] > # This option supports live configuration reload.
	I1114 14:05:08.157433 1255771 command_runner.go:130] > # rdt_config_file = ""
	I1114 14:05:08.157455 1255771 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1114 14:05:08.157490 1255771 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1114 14:05:08.157515 1255771 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1114 14:05:08.157535 1255771 command_runner.go:130] > # separate_pull_cgroup = ""
	I1114 14:05:08.157571 1255771 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1114 14:05:08.157595 1255771 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1114 14:05:08.157615 1255771 command_runner.go:130] > # will be added.
	I1114 14:05:08.157650 1255771 command_runner.go:130] > # default_capabilities = [
	I1114 14:05:08.157672 1255771 command_runner.go:130] > # 	"CHOWN",
	I1114 14:05:08.157689 1255771 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1114 14:05:08.157709 1255771 command_runner.go:130] > # 	"FSETID",
	I1114 14:05:08.157740 1255771 command_runner.go:130] > # 	"FOWNER",
	I1114 14:05:08.157764 1255771 command_runner.go:130] > # 	"SETGID",
	I1114 14:05:08.157784 1255771 command_runner.go:130] > # 	"SETUID",
	I1114 14:05:08.157817 1255771 command_runner.go:130] > # 	"SETPCAP",
	I1114 14:05:08.157840 1255771 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1114 14:05:08.157860 1255771 command_runner.go:130] > # 	"KILL",
	I1114 14:05:08.157879 1255771 command_runner.go:130] > # ]
	I1114 14:05:08.157916 1255771 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1114 14:05:08.157943 1255771 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1114 14:05:08.157969 1255771 command_runner.go:130] > # add_inheritable_capabilities = true
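As a sketch of narrowing the capability defaults listed above, a hypothetical drop-in that keeps only two of them (the file name and the selection are illustrative; a restart is used because the comments above do not advertise live reload for this option):

sudo tee /etc/crio/crio.conf.d/20-caps.conf <<'EOF'
[crio.runtime]
# Containers receive only these unless the pod spec requests more.
default_capabilities = [
	"CHOWN",
	"NET_BIND_SERVICE",
]
EOF
sudo systemctl restart crio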
	I1114 14:05:08.158002 1255771 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1114 14:05:08.158027 1255771 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1114 14:05:08.158047 1255771 command_runner.go:130] > # default_sysctls = [
	I1114 14:05:08.158079 1255771 command_runner.go:130] > # ]
	I1114 14:05:08.158100 1255771 command_runner.go:130] > # List of devices on the host that a
	I1114 14:05:08.158122 1255771 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1114 14:05:08.158141 1255771 command_runner.go:130] > # allowed_devices = [
	I1114 14:05:08.158261 1255771 command_runner.go:130] > # 	"/dev/fuse",
	I1114 14:05:08.158281 1255771 command_runner.go:130] > # ]
	I1114 14:05:08.158302 1255771 command_runner.go:130] > # List of additional devices, specified as
	I1114 14:05:08.158412 1255771 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1114 14:05:08.158503 1255771 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1114 14:05:08.158526 1255771 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1114 14:05:08.158560 1255771 command_runner.go:130] > # additional_devices = [
	I1114 14:05:08.158580 1255771 command_runner.go:130] > # ]
	I1114 14:05:08.158603 1255771 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1114 14:05:08.158638 1255771 command_runner.go:130] > # cdi_spec_dirs = [
	I1114 14:05:08.158658 1255771 command_runner.go:130] > # 	"/etc/cdi",
	I1114 14:05:08.158678 1255771 command_runner.go:130] > # 	"/var/run/cdi",
	I1114 14:05:08.158710 1255771 command_runner.go:130] > # ]
	I1114 14:05:08.158749 1255771 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1114 14:05:08.158771 1255771 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1114 14:05:08.158813 1255771 command_runner.go:130] > # Defaults to false.
	I1114 14:05:08.158835 1255771 command_runner.go:130] > # device_ownership_from_security_context = false
	I1114 14:05:08.158857 1255771 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1114 14:05:08.158901 1255771 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1114 14:05:08.158919 1255771 command_runner.go:130] > # hooks_dir = [
	I1114 14:05:08.158939 1255771 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1114 14:05:08.158979 1255771 command_runner.go:130] > # ]
	I1114 14:05:08.159002 1255771 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1114 14:05:08.159039 1255771 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1114 14:05:08.159068 1255771 command_runner.go:130] > # its default mounts from the following two files:
	I1114 14:05:08.159086 1255771 command_runner.go:130] > #
	I1114 14:05:08.159120 1255771 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1114 14:05:08.159151 1255771 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1114 14:05:08.159173 1255771 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1114 14:05:08.159205 1255771 command_runner.go:130] > #
	I1114 14:05:08.159227 1255771 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1114 14:05:08.159250 1255771 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1114 14:05:08.159287 1255771 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1114 14:05:08.159309 1255771 command_runner.go:130] > #      only add mounts it finds in this file.
	I1114 14:05:08.159329 1255771 command_runner.go:130] > #
	I1114 14:05:08.159363 1255771 command_runner.go:130] > # default_mounts_file = ""
	I1114 14:05:08.159386 1255771 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1114 14:05:08.159409 1255771 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1114 14:05:08.159441 1255771 command_runner.go:130] > # pids_limit = 0
	I1114 14:05:08.159466 1255771 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1114 14:05:08.159491 1255771 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1114 14:05:08.159536 1255771 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1114 14:05:08.159563 1255771 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1114 14:05:08.159597 1255771 command_runner.go:130] > # log_size_max = -1
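Because pids_limit and log_size_max are both deprecated in favor of kubelet settings, the equivalents belong in the KubeletConfiguration. A hedged sketch appending the two corresponding kubelet fields (the values are arbitrary examples, not something this run does) to the config file this log later writes out:

cat <<'EOF' | sudo tee -a /var/lib/kubelet/config.yaml
# Kubelet-side replacements for CRI-O's deprecated pids_limit / log_size_max.
podPidsLimit: 4096
containerLogMaxSize: "10Mi"
EOF
sudo systemctl restart kubelet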
	I1114 14:05:08.159638 1255771 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1114 14:05:08.159657 1255771 command_runner.go:130] > # log_to_journald = false
	I1114 14:05:08.159701 1255771 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1114 14:05:08.159722 1255771 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1114 14:05:08.159742 1255771 command_runner.go:130] > # Path to directory for container attach sockets.
	I1114 14:05:08.159783 1255771 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1114 14:05:08.159805 1255771 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1114 14:05:08.159826 1255771 command_runner.go:130] > # bind_mount_prefix = ""
	I1114 14:05:08.159869 1255771 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1114 14:05:08.159888 1255771 command_runner.go:130] > # read_only = false
	I1114 14:05:08.159909 1255771 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1114 14:05:08.159944 1255771 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1114 14:05:08.159967 1255771 command_runner.go:130] > # live configuration reload.
	I1114 14:05:08.159990 1255771 command_runner.go:130] > # log_level = "info"
	I1114 14:05:08.160033 1255771 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1114 14:05:08.160053 1255771 command_runner.go:130] > # This option supports live configuration reload.
	I1114 14:05:08.160074 1255771 command_runner.go:130] > # log_filter = ""
	I1114 14:05:08.160117 1255771 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1114 14:05:08.160138 1255771 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1114 14:05:08.160157 1255771 command_runner.go:130] > # separated by comma.
	I1114 14:05:08.160192 1255771 command_runner.go:130] > # uid_mappings = ""
	I1114 14:05:08.160216 1255771 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1114 14:05:08.160239 1255771 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1114 14:05:08.160282 1255771 command_runner.go:130] > # separated by comma.
	I1114 14:05:08.160301 1255771 command_runner.go:130] > # gid_mappings = ""
	I1114 14:05:08.160323 1255771 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1114 14:05:08.160359 1255771 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1114 14:05:08.160394 1255771 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1114 14:05:08.160427 1255771 command_runner.go:130] > # minimum_mappable_uid = -1
	I1114 14:05:08.160459 1255771 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1114 14:05:08.160481 1255771 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1114 14:05:08.160516 1255771 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1114 14:05:08.160932 1255771 command_runner.go:130] > # minimum_mappable_gid = -1
	I1114 14:05:08.160992 1255771 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1114 14:05:08.161016 1255771 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1114 14:05:08.161054 1255771 command_runner.go:130] > # value is 30s; lower values are ignored by CRI-O.
	I1114 14:05:08.161073 1255771 command_runner.go:130] > # ctr_stop_timeout = 30
	I1114 14:05:08.161087 1255771 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1114 14:05:08.161103 1255771 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1114 14:05:08.161110 1255771 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1114 14:05:08.161122 1255771 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1114 14:05:08.161127 1255771 command_runner.go:130] > # drop_infra_ctr = true
	I1114 14:05:08.161135 1255771 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1114 14:05:08.161158 1255771 command_runner.go:130] > # You can use Linux CPU list format to specify desired CPUs.
	I1114 14:05:08.161189 1255771 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1114 14:05:08.161198 1255771 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1114 14:05:08.161206 1255771 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1114 14:05:08.161219 1255771 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1114 14:05:08.161225 1255771 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1114 14:05:08.161234 1255771 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1114 14:05:08.161254 1255771 command_runner.go:130] > # pinns_path = ""
	I1114 14:05:08.161273 1255771 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1114 14:05:08.161287 1255771 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1114 14:05:08.161295 1255771 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1114 14:05:08.161301 1255771 command_runner.go:130] > # default_runtime = "runc"
	I1114 14:05:08.161311 1255771 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1114 14:05:08.161322 1255771 command_runner.go:130] > # will cause container creation to fail (as opposed to the current behavior of creating the path as a directory).
	I1114 14:05:08.161374 1255771 command_runner.go:130] > # This option protects against source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1114 14:05:08.161387 1255771 command_runner.go:130] > # creation as a file is not desired either.
	I1114 14:05:08.161397 1255771 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1114 14:05:08.161403 1255771 command_runner.go:130] > # the hostname is being managed dynamically.
	I1114 14:05:08.161414 1255771 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1114 14:05:08.161419 1255771 command_runner.go:130] > # ]
	I1114 14:05:08.161427 1255771 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1114 14:05:08.161438 1255771 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1114 14:05:08.161457 1255771 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1114 14:05:08.161471 1255771 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1114 14:05:08.161477 1255771 command_runner.go:130] > #
	I1114 14:05:08.161497 1255771 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1114 14:05:08.161505 1255771 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1114 14:05:08.161510 1255771 command_runner.go:130] > #  runtime_type = "oci"
	I1114 14:05:08.161516 1255771 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1114 14:05:08.161524 1255771 command_runner.go:130] > #  privileged_without_host_devices = false
	I1114 14:05:08.161530 1255771 command_runner.go:130] > #  allowed_annotations = []
	I1114 14:05:08.161540 1255771 command_runner.go:130] > # Where:
	I1114 14:05:08.161547 1255771 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1114 14:05:08.161565 1255771 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1114 14:05:08.161580 1255771 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1114 14:05:08.161601 1255771 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1114 14:05:08.161614 1255771 command_runner.go:130] > #   in $PATH.
	I1114 14:05:08.161621 1255771 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1114 14:05:08.161627 1255771 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1114 14:05:08.161636 1255771 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1114 14:05:08.161643 1255771 command_runner.go:130] > #   state.
	I1114 14:05:08.161651 1255771 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1114 14:05:08.161670 1255771 command_runner.go:130] > #   file. This can only be used with the VM runtime_type.
	I1114 14:05:08.161683 1255771 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1114 14:05:08.161693 1255771 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1114 14:05:08.161704 1255771 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1114 14:05:08.161713 1255771 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1114 14:05:08.161722 1255771 command_runner.go:130] > #   The currently recognized values are:
	I1114 14:05:08.161730 1255771 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1114 14:05:08.161757 1255771 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1114 14:05:08.161773 1255771 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1114 14:05:08.161782 1255771 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1114 14:05:08.161795 1255771 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1114 14:05:08.161803 1255771 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1114 14:05:08.161825 1255771 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1114 14:05:08.161880 1255771 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1114 14:05:08.161904 1255771 command_runner.go:130] > #   should be moved to the container's cgroup
	I1114 14:05:08.161911 1255771 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1114 14:05:08.161917 1255771 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I1114 14:05:08.161922 1255771 command_runner.go:130] > runtime_type = "oci"
	I1114 14:05:08.161927 1255771 command_runner.go:130] > runtime_root = "/run/runc"
	I1114 14:05:08.161932 1255771 command_runner.go:130] > runtime_config_path = ""
	I1114 14:05:08.161937 1255771 command_runner.go:130] > monitor_path = ""
	I1114 14:05:08.161960 1255771 command_runner.go:130] > monitor_cgroup = ""
	I1114 14:05:08.161974 1255771 command_runner.go:130] > monitor_exec_cgroup = ""
	I1114 14:05:08.161992 1255771 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1114 14:05:08.162002 1255771 command_runner.go:130] > # running containers
	I1114 14:05:08.162008 1255771 command_runner.go:130] > #[crio.runtime.runtimes.crun]
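Following the runtime-handler format documented above, a hypothetical drop-in that enables the commented-out crun handler; the binary path assumes a distro package, and a Kubernetes RuntimeClass with handler "crun" would be needed for pods to select it:

sudo tee /etc/crio/crio.conf.d/30-crun.conf <<'EOF'
[crio.runtime.runtimes.crun]
runtime_path = "/usr/bin/crun"   # assumed install location
runtime_type = "oci"
runtime_root = "/run/crun"
EOF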
	I1114 14:05:08.162016 1255771 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1114 14:05:08.162024 1255771 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1114 14:05:08.162043 1255771 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1114 14:05:08.162056 1255771 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1114 14:05:08.162062 1255771 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1114 14:05:08.162086 1255771 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1114 14:05:08.162092 1255771 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1114 14:05:08.162098 1255771 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1114 14:05:08.162105 1255771 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1114 14:05:08.162113 1255771 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1114 14:05:08.162122 1255771 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1114 14:05:08.162130 1255771 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1114 14:05:08.162161 1255771 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1114 14:05:08.162178 1255771 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1114 14:05:08.162186 1255771 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1114 14:05:08.162203 1255771 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1114 14:05:08.162214 1255771 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1114 14:05:08.162225 1255771 command_runner.go:130] > # signifying that the default value for that resource type should be overridden.
	I1114 14:05:08.162245 1255771 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1114 14:05:08.162256 1255771 command_runner.go:130] > # Example:
	I1114 14:05:08.162263 1255771 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1114 14:05:08.162269 1255771 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1114 14:05:08.162275 1255771 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1114 14:05:08.162296 1255771 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1114 14:05:08.162302 1255771 command_runner.go:130] > # cpuset = 0
	I1114 14:05:08.162312 1255771 command_runner.go:130] > # cpushares = "0-1"
	I1114 14:05:08.162317 1255771 command_runner.go:130] > # Where:
	I1114 14:05:08.162325 1255771 command_runner.go:130] > # The workload name is workload-type.
	I1114 14:05:08.162335 1255771 command_runner.go:130] > # To opt in, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1114 14:05:08.162342 1255771 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1114 14:05:08.162349 1255771 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1114 14:05:08.162371 1255771 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1114 14:05:08.162385 1255771 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1114 14:05:08.162390 1255771 command_runner.go:130] > # 
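Tying the workload example together, a sketch of a pod opting into the hypothetical workload-type workload through its activation annotation; per-container overrides would then use the annotation_prefix format described above:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: workload-demo              # illustrative name
  annotations:
    io.crio/workload: ""           # key-only match; the value is ignored
spec:
  containers:
  - name: c1
    image: registry.k8s.io/pause:3.9
EOF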
	I1114 14:05:08.162398 1255771 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1114 14:05:08.162406 1255771 command_runner.go:130] > #
	I1114 14:05:08.162413 1255771 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1114 14:05:08.162423 1255771 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1114 14:05:08.162441 1255771 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1114 14:05:08.162457 1255771 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1114 14:05:08.162476 1255771 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1114 14:05:08.162489 1255771 command_runner.go:130] > [crio.image]
	I1114 14:05:08.162498 1255771 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1114 14:05:08.162508 1255771 command_runner.go:130] > # default_transport = "docker://"
	I1114 14:05:08.162517 1255771 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1114 14:05:08.162524 1255771 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1114 14:05:08.162535 1255771 command_runner.go:130] > # global_auth_file = ""
	I1114 14:05:08.162542 1255771 command_runner.go:130] > # The image used to instantiate infra containers.
	I1114 14:05:08.162562 1255771 command_runner.go:130] > # This option supports live configuration reload.
	I1114 14:05:08.162585 1255771 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1114 14:05:08.162595 1255771 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1114 14:05:08.162606 1255771 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1114 14:05:08.162613 1255771 command_runner.go:130] > # This option supports live configuration reload.
	I1114 14:05:08.162624 1255771 command_runner.go:130] > # pause_image_auth_file = ""
	I1114 14:05:08.162632 1255771 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1114 14:05:08.162654 1255771 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1114 14:05:08.162669 1255771 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1114 14:05:08.162677 1255771 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1114 14:05:08.162684 1255771 command_runner.go:130] > # pause_command = "/pause"
	I1114 14:05:08.162693 1255771 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1114 14:05:08.162740 1255771 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1114 14:05:08.162765 1255771 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1114 14:05:08.162774 1255771 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1114 14:05:08.162780 1255771 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1114 14:05:08.162785 1255771 command_runner.go:130] > # signature_policy = ""
	I1114 14:05:08.162793 1255771 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1114 14:05:08.162810 1255771 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1114 14:05:08.162834 1255771 command_runner.go:130] > # changing them here.
	I1114 14:05:08.162846 1255771 command_runner.go:130] > # insecure_registries = [
	I1114 14:05:08.162851 1255771 command_runner.go:130] > # ]
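As the comment above recommends, insecure registries are better configured system-wide via containers-registries.conf(5) than in crio.conf. A sketch with a placeholder registry host:

sudo tee -a /etc/containers/registries.conf <<'EOF'
[[registry]]
location = "registry.example.internal:5000"   # placeholder host
insecure = true
EOF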
	I1114 14:05:08.162859 1255771 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1114 14:05:08.162871 1255771 command_runner.go:130] > # ignore; the last ignores volumes entirely.
	I1114 14:05:08.162877 1255771 command_runner.go:130] > # image_volumes = "mkdir"
	I1114 14:05:08.162900 1255771 command_runner.go:130] > # Temporary directory to use for storing big files
	I1114 14:05:08.162915 1255771 command_runner.go:130] > # big_files_temporary_dir = ""
	I1114 14:05:08.162927 1255771 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1114 14:05:08.162938 1255771 command_runner.go:130] > # CNI plugins.
	I1114 14:05:08.162944 1255771 command_runner.go:130] > [crio.network]
	I1114 14:05:08.162951 1255771 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1114 14:05:08.162958 1255771 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1114 14:05:08.162965 1255771 command_runner.go:130] > # cni_default_network = ""
	I1114 14:05:08.162973 1255771 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1114 14:05:08.162992 1255771 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1114 14:05:08.163008 1255771 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1114 14:05:08.163014 1255771 command_runner.go:130] > # plugin_dirs = [
	I1114 14:05:08.163024 1255771 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1114 14:05:08.163029 1255771 command_runner.go:130] > # ]
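Since CRI-O picks up the first configuration it finds in network_dir when cni_default_network is unset, a quick sketch for checking what will be selected:

ls /etc/cni/net.d/   # the lexically first file wins when cni_default_network is unset
sudo crictl info     # the NetworkReady condition reflects the CNI config actually loaded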
	I1114 14:05:08.163036 1255771 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1114 14:05:08.163042 1255771 command_runner.go:130] > [crio.metrics]
	I1114 14:05:08.163051 1255771 command_runner.go:130] > # Globally enable or disable metrics support.
	I1114 14:05:08.163080 1255771 command_runner.go:130] > # enable_metrics = false
	I1114 14:05:08.163093 1255771 command_runner.go:130] > # Specify enabled metrics collectors.
	I1114 14:05:08.163100 1255771 command_runner.go:130] > # Per default all metrics are enabled.
	I1114 14:05:08.163113 1255771 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1114 14:05:08.163122 1255771 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1114 14:05:08.163129 1255771 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1114 14:05:08.163139 1255771 command_runner.go:130] > # metrics_collectors = [
	I1114 14:05:08.163144 1255771 command_runner.go:130] > # 	"operations",
	I1114 14:05:08.163161 1255771 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1114 14:05:08.163173 1255771 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1114 14:05:08.163179 1255771 command_runner.go:130] > # 	"operations_errors",
	I1114 14:05:08.163184 1255771 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1114 14:05:08.163189 1255771 command_runner.go:130] > # 	"image_pulls_by_name",
	I1114 14:05:08.163203 1255771 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1114 14:05:08.163212 1255771 command_runner.go:130] > # 	"image_pulls_failures",
	I1114 14:05:08.163219 1255771 command_runner.go:130] > # 	"image_pulls_successes",
	I1114 14:05:08.163228 1255771 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1114 14:05:08.163234 1255771 command_runner.go:130] > # 	"image_layer_reuse",
	I1114 14:05:08.163240 1255771 command_runner.go:130] > # 	"containers_oom_total",
	I1114 14:05:08.163247 1255771 command_runner.go:130] > # 	"containers_oom",
	I1114 14:05:08.163252 1255771 command_runner.go:130] > # 	"processes_defunct",
	I1114 14:05:08.163257 1255771 command_runner.go:130] > # 	"operations_total",
	I1114 14:05:08.163265 1255771 command_runner.go:130] > # 	"operations_latency_seconds",
	I1114 14:05:08.163281 1255771 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1114 14:05:08.163293 1255771 command_runner.go:130] > # 	"operations_errors_total",
	I1114 14:05:08.163300 1255771 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1114 14:05:08.163319 1255771 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1114 14:05:08.163326 1255771 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1114 14:05:08.163335 1255771 command_runner.go:130] > # 	"image_pulls_success_total",
	I1114 14:05:08.163340 1255771 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1114 14:05:08.163350 1255771 command_runner.go:130] > # 	"containers_oom_count_total",
	I1114 14:05:08.163354 1255771 command_runner.go:130] > # ]
	I1114 14:05:08.163361 1255771 command_runner.go:130] > # The port on which the metrics server will listen.
	I1114 14:05:08.163370 1255771 command_runner.go:130] > # metrics_port = 9090
	I1114 14:05:08.163386 1255771 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1114 14:05:08.163400 1255771 command_runner.go:130] > # metrics_socket = ""
	I1114 14:05:08.163408 1255771 command_runner.go:130] > # The certificate for the secure metrics server.
	I1114 14:05:08.163419 1255771 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1114 14:05:08.163427 1255771 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1114 14:05:08.163437 1255771 command_runner.go:130] > # certificate on any modification event.
	I1114 14:05:08.163442 1255771 command_runner.go:130] > # metrics_cert = ""
	I1114 14:05:08.163451 1255771 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1114 14:05:08.163468 1255771 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1114 14:05:08.163481 1255771 command_runner.go:130] > # metrics_key = ""
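With enable_metrics turned on, CRI-O serves Prometheus metrics on metrics_port (9090 by default, per the options above). A sketch of a local scrape:

curl -s http://127.0.0.1:9090/metrics | grep -m5 crio_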
	I1114 14:05:08.163489 1255771 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1114 14:05:08.163505 1255771 command_runner.go:130] > [crio.tracing]
	I1114 14:05:08.163520 1255771 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1114 14:05:08.163526 1255771 command_runner.go:130] > # enable_tracing = false
	I1114 14:05:08.163536 1255771 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I1114 14:05:08.163542 1255771 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1114 14:05:08.163548 1255771 command_runner.go:130] > # Number of samples to collect per million spans.
	I1114 14:05:08.163554 1255771 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1114 14:05:08.163563 1255771 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1114 14:05:08.163581 1255771 command_runner.go:130] > [crio.stats]
	I1114 14:05:08.163605 1255771 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1114 14:05:08.163613 1255771 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1114 14:05:08.163629 1255771 command_runner.go:130] > # stats_collection_period = 0
	I1114 14:05:08.165831 1255771 command_runner.go:130] ! time="2023-11-14 14:05:08.151678300Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I1114 14:05:08.165863 1255771 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1114 14:05:08.165956 1255771 cni.go:84] Creating CNI manager for ""
	I1114 14:05:08.165982 1255771 cni.go:136] 2 nodes found, recommending kindnet
	I1114 14:05:08.166005 1255771 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1114 14:05:08.166036 1255771 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-683928 NodeName:multinode-683928-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1114 14:05:08.166167 1255771 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-683928-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
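The rendered kubeadm config above can be sanity-checked before use; 'kubeadm config validate' has existed since kubeadm v1.26, and the file path below is a placeholder for wherever the YAML is saved:

kubeadm config validate --config /tmp/kubeadm.yaml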
	
	I1114 14:05:08.166222 1255771 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-683928-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:multinode-683928 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1114 14:05:08.166293 1255771 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1114 14:05:08.176460 1255771 command_runner.go:130] > kubeadm
	I1114 14:05:08.176479 1255771 command_runner.go:130] > kubectl
	I1114 14:05:08.176484 1255771 command_runner.go:130] > kubelet
	I1114 14:05:08.177702 1255771 binaries.go:44] Found k8s binaries, skipping transfer
	I1114 14:05:08.177804 1255771 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1114 14:05:08.188847 1255771 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1114 14:05:08.210730 1255771 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1114 14:05:08.233749 1255771 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1114 14:05:08.238872 1255771 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1114 14:05:08.254385 1255771 host.go:66] Checking if "multinode-683928" exists ...
	I1114 14:05:08.254656 1255771 start.go:304] JoinCluster: &{Name:multinode-683928 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:multinode-683928 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 14:05:08.254745 1255771 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1114 14:05:08.254794 1255771 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-683928
	I1114 14:05:08.255676 1255771 config.go:182] Loaded profile config "multinode-683928": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 14:05:08.274223 1255771 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34354 SSHKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/multinode-683928/id_rsa Username:docker}
	I1114 14:05:08.443601 1255771 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token xl48x1.nnlg4nn6ir3j5woq --discovery-token-ca-cert-hash sha256:1a1b25420be6487c50639ce0b981e16ee30b54e658d487c3adf6952ff2c4a2c6 
	I1114 14:05:08.447255 1255771 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1114 14:05:08.447294 1255771 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token xl48x1.nnlg4nn6ir3j5woq --discovery-token-ca-cert-hash sha256:1a1b25420be6487c50639ce0b981e16ee30b54e658d487c3adf6952ff2c4a2c6 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-683928-m02"
	I1114 14:05:08.502768 1255771 command_runner.go:130] > [preflight] Running pre-flight checks
	I1114 14:05:08.540617 1255771 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I1114 14:05:08.540643 1255771 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1049-aws
	I1114 14:05:08.540651 1255771 command_runner.go:130] > OS: Linux
	I1114 14:05:08.540658 1255771 command_runner.go:130] > CGROUPS_CPU: enabled
	I1114 14:05:08.540665 1255771 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I1114 14:05:08.540672 1255771 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I1114 14:05:08.540685 1255771 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I1114 14:05:08.540692 1255771 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I1114 14:05:08.540699 1255771 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I1114 14:05:08.540706 1255771 command_runner.go:130] > CGROUPS_PIDS: enabled
	I1114 14:05:08.540713 1255771 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I1114 14:05:08.540719 1255771 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I1114 14:05:08.661838 1255771 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1114 14:05:08.661868 1255771 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1114 14:05:08.693570 1255771 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1114 14:05:08.693598 1255771 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1114 14:05:08.693607 1255771 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1114 14:05:08.794008 1255771 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1114 14:05:11.332087 1255771 command_runner.go:130] > This node has joined the cluster:
	I1114 14:05:11.332112 1255771 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1114 14:05:11.332120 1255771 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1114 14:05:11.332129 1255771 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1114 14:05:11.335386 1255771 command_runner.go:130] ! W1114 14:05:08.502181    1027 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I1114 14:05:11.335422 1255771 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1049-aws\n", err: exit status 1
	I1114 14:05:11.335434 1255771 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1114 14:05:11.335447 1255771 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token xl48x1.nnlg4nn6ir3j5woq --discovery-token-ca-cert-hash sha256:1a1b25420be6487c50639ce0b981e16ee30b54e658d487c3adf6952ff2c4a2c6 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-683928-m02": (2.888141736s)
	I1114 14:05:11.335468 1255771 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1114 14:05:11.554706 1255771 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I1114 14:05:11.554735 1255771 start.go:306] JoinCluster complete in 3.300077941s
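The worker-join flow logged above can be reproduced by hand with the same commands this run executed. A sketch (the token and hash must be regenerated on the control plane, not reused from this log; the unix:// scheme avoids the CRI-socket deprecation warning printed above):

# On the control-plane node:
kubeadm token create --print-join-command --ttl=0
# On the worker, with the printed token and hash substituted:
sudo kubeadm join control-plane.minikube.internal:8443 \
  --token <token> --discovery-token-ca-cert-hash sha256:<hash> \
  --cri-socket unix:///var/run/crio/crio.sock
sudo systemctl enable --now kubelet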
	I1114 14:05:11.554746 1255771 cni.go:84] Creating CNI manager for ""
	I1114 14:05:11.554752 1255771 cni.go:136] 2 nodes found, recommending kindnet
	I1114 14:05:11.554807 1255771 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1114 14:05:11.559906 1255771 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1114 14:05:11.559981 1255771 command_runner.go:130] >   Size: 3841245   	Blocks: 7504       IO Block: 4096   regular file
	I1114 14:05:11.560005 1255771 command_runner.go:130] > Device: 3ah/58d	Inode: 1575160     Links: 1
	I1114 14:05:11.560029 1255771 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1114 14:05:11.560066 1255771 command_runner.go:130] > Access: 2023-05-09 19:54:42.000000000 +0000
	I1114 14:05:11.560091 1255771 command_runner.go:130] > Modify: 2023-05-09 19:54:42.000000000 +0000
	I1114 14:05:11.560114 1255771 command_runner.go:130] > Change: 2023-11-14 13:34:27.093734456 +0000
	I1114 14:05:11.560146 1255771 command_runner.go:130] >  Birth: 2023-11-14 13:34:27.053734656 +0000
	I1114 14:05:11.560506 1255771 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1114 14:05:11.560525 1255771 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1114 14:05:11.582641 1255771 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1114 14:05:11.902643 1255771 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1114 14:05:11.902710 1255771 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1114 14:05:11.902733 1255771 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1114 14:05:11.902756 1255771 command_runner.go:130] > daemonset.apps/kindnet configured
	I1114 14:05:11.903164 1255771 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17581-1186318/kubeconfig
	I1114 14:05:11.903472 1255771 kapi.go:59] client config for multinode-683928: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/multinode-683928/client.crt", KeyFile:"/home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/multinode-683928/client.key", CAFile:"/home/jenkins/minikube-integration/17581-1186318/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c4650), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1114 14:05:11.903921 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1114 14:05:11.903959 1255771 round_trippers.go:469] Request Headers:
	I1114 14:05:11.903982 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:05:11.904005 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:05:11.906790 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:05:11.906815 1255771 round_trippers.go:577] Response Headers:
	I1114 14:05:11.906824 1255771 round_trippers.go:580]     Audit-Id: 65eb14d9-856e-4de3-a8dc-573bbd43f911
	I1114 14:05:11.906830 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:05:11.906858 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:05:11.906870 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:05:11.906877 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:05:11.906883 1255771 round_trippers.go:580]     Content-Length: 291
	I1114 14:05:11.906892 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:05:11 GMT
	I1114 14:05:11.906915 1255771 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"ebff7a46-980b-417a-ba6d-f7dd75dbc9ce","resourceVersion":"421","creationTimestamp":"2023-11-14T14:04:09Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1114 14:05:11.907014 1255771 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-683928" context rescaled to 1 replicas
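The scale-subresource GET above is the same API surface that 'kubectl scale' drives; an equivalent one-liner for this cluster would be:

kubectl --context multinode-683928 -n kube-system scale deployment/coredns --replicas=1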
	I1114 14:05:11.907045 1255771 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1114 14:05:11.909980 1255771 out.go:177] * Verifying Kubernetes components...
	I1114 14:05:11.911929 1255771 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 14:05:11.926288 1255771 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17581-1186318/kubeconfig
	I1114 14:05:11.926650 1255771 kapi.go:59] client config for multinode-683928: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/multinode-683928/client.crt", KeyFile:"/home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/multinode-683928/client.key", CAFile:"/home/jenkins/minikube-integration/17581-1186318/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16c4650), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1114 14:05:11.927002 1255771 node_ready.go:35] waiting up to 6m0s for node "multinode-683928-m02" to be "Ready" ...
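The readiness poll that follows (repeated GETs of the Node object) is equivalent to waiting with kubectl; a sketch with the same six-minute budget:

kubectl --context multinode-683928 wait --for=condition=Ready node/multinode-683928-m02 --timeout=6m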
	I1114 14:05:11.927109 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928-m02
	I1114 14:05:11.927144 1255771 round_trippers.go:469] Request Headers:
	I1114 14:05:11.927168 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:05:11.927191 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:05:11.929829 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:05:11.929854 1255771 round_trippers.go:577] Response Headers:
	I1114 14:05:11.929863 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:05:11.929869 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:05:11.929877 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:05:11.929907 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:05:11.929923 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:05:11 GMT
	I1114 14:05:11.929930 1255771 round_trippers.go:580]     Audit-Id: 074bb534-98ea-4b88-9fd4-ebd574cf5267
	I1114 14:05:11.930064 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928-m02","uid":"cc50001c-297d-41af-892c-eebeab7d42ac","resourceVersion":"457","creationTimestamp":"2023-11-14T14:05:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:05:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:05:11Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5183 chars]
	I1114 14:05:11.930498 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928-m02
	I1114 14:05:11.930515 1255771 round_trippers.go:469] Request Headers:
	I1114 14:05:11.930524 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:05:11.930531 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:05:11.933136 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:05:11.933162 1255771 round_trippers.go:577] Response Headers:
	I1114 14:05:11.933172 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:05:11.933178 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:05:11.933185 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:05:11.933191 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:05:11 GMT
	I1114 14:05:11.933198 1255771 round_trippers.go:580]     Audit-Id: 817a9513-b9a6-492c-b0cf-564cac121eed
	I1114 14:05:11.933208 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:05:11.933480 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928-m02","uid":"cc50001c-297d-41af-892c-eebeab7d42ac","resourceVersion":"457","creationTimestamp":"2023-11-14T14:05:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:05:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:05:11Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5183 chars]
	I1114 14:05:12.434425 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928-m02
	I1114 14:05:12.434493 1255771 round_trippers.go:469] Request Headers:
	I1114 14:05:12.434517 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:05:12.434541 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:05:12.437558 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:05:12.437632 1255771 round_trippers.go:577] Response Headers:
	I1114 14:05:12.437655 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:05:12 GMT
	I1114 14:05:12.437678 1255771 round_trippers.go:580]     Audit-Id: d7b1bfa9-4dd1-4239-a8f0-5f5bee8e69ee
	I1114 14:05:12.437708 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:05:12.437716 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:05:12.437722 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:05:12.437740 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:05:12.437872 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928-m02","uid":"cc50001c-297d-41af-892c-eebeab7d42ac","resourceVersion":"469","creationTimestamp":"2023-11-14T14:05:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:05:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:05:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I1114 14:05:12.934598 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928-m02
	I1114 14:05:12.934622 1255771 round_trippers.go:469] Request Headers:
	I1114 14:05:12.934631 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:05:12.934639 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:05:12.937353 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:05:12.937381 1255771 round_trippers.go:577] Response Headers:
	I1114 14:05:12.937391 1255771 round_trippers.go:580]     Audit-Id: f5e9df80-fd12-42dc-ad64-e1fa54b2be11
	I1114 14:05:12.937398 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:05:12.937405 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:05:12.937411 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:05:12.937418 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:05:12.937428 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:05:12 GMT
	I1114 14:05:12.937540 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928-m02","uid":"cc50001c-297d-41af-892c-eebeab7d42ac","resourceVersion":"469","creationTimestamp":"2023-11-14T14:05:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:05:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:05:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5292 chars]
	I1114 14:05:13.434088 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928-m02
	I1114 14:05:13.434111 1255771 round_trippers.go:469] Request Headers:
	I1114 14:05:13.434122 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:05:13.434130 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:05:13.436794 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:05:13.436821 1255771 round_trippers.go:577] Response Headers:
	I1114 14:05:13.436831 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:05:13.436838 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:05:13 GMT
	I1114 14:05:13.436844 1255771 round_trippers.go:580]     Audit-Id: 9841645e-7352-4313-9796-17620103458d
	I1114 14:05:13.436850 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:05:13.436856 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:05:13.436867 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:05:13.436997 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928-m02","uid":"cc50001c-297d-41af-892c-eebeab7d42ac","resourceVersion":"478","creationTimestamp":"2023-11-14T14:05:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:05:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:05:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5378 chars]
	I1114 14:05:13.437368 1255771 node_ready.go:49] node "multinode-683928-m02" has status "Ready":"True"
	I1114 14:05:13.437387 1255771 node_ready.go:38] duration metric: took 1.510349395s waiting for node "multinode-683928-m02" to be "Ready" ...
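
The node_ready.go wait above is a straightforward poll of the Node object until its NodeReady condition reports True, which is why the log shows repeated GETs of /api/v1/nodes/multinode-683928-m02 roughly every half second. A self-contained sketch of that check; the interval and error wording are assumptions, not minikube's exact values:

    package kubewait

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls the named Node until the NodeReady condition is
    // True or the timeout elapses, like the GET loop in the log above.
    func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            node, err := cs.CoreV1().Nodes().Get(context.Background(), name, metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            time.Sleep(500 * time.Millisecond) // assumed interval
        }
        return fmt.Errorf("node %q not Ready within %s", name, timeout)
    }
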
	I1114 14:05:13.437397 1255771 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 14:05:13.437476 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I1114 14:05:13.437486 1255771 round_trippers.go:469] Request Headers:
	I1114 14:05:13.437494 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:05:13.437507 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:05:13.442227 1255771 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1114 14:05:13.442254 1255771 round_trippers.go:577] Response Headers:
	I1114 14:05:13.442263 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:05:13.442281 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:05:13.442289 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:05:13.442297 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:05:13.442303 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:05:13 GMT
	I1114 14:05:13.442310 1255771 round_trippers.go:580]     Audit-Id: 6f62189e-eda5-4991-8a7e-0bf94c207d15
	I1114 14:05:13.442922 1255771 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"478"},"items":[{"metadata":{"name":"coredns-5dd5756b68-wxp87","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"51c2bd2a-c15e-4489-ad3b-7ca65e4ec898","resourceVersion":"417","creationTimestamp":"2023-11-14T14:04:22Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"81b2172f-b4bf-4215-976d-efff1994decb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:04:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"81b2172f-b4bf-4215-976d-efff1994decb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 68970 chars]
	I1114 14:05:13.445871 1255771 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-wxp87" in "kube-system" namespace to be "Ready" ...
	I1114 14:05:13.446000 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-wxp87
	I1114 14:05:13.446011 1255771 round_trippers.go:469] Request Headers:
	I1114 14:05:13.446021 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:05:13.446029 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:05:13.448439 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:05:13.448471 1255771 round_trippers.go:577] Response Headers:
	I1114 14:05:13.448479 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:05:13.448486 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:05:13 GMT
	I1114 14:05:13.448493 1255771 round_trippers.go:580]     Audit-Id: eb149b50-db35-462f-bf06-7f50dd4a7eab
	I1114 14:05:13.448500 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:05:13.448512 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:05:13.448519 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:05:13.448905 1255771 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-wxp87","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"51c2bd2a-c15e-4489-ad3b-7ca65e4ec898","resourceVersion":"417","creationTimestamp":"2023-11-14T14:04:22Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"81b2172f-b4bf-4215-976d-efff1994decb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:04:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"81b2172f-b4bf-4215-976d-efff1994decb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I1114 14:05:13.449435 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:05:13.449450 1255771 round_trippers.go:469] Request Headers:
	I1114 14:05:13.449459 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:05:13.449467 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:05:13.451847 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:05:13.451927 1255771 round_trippers.go:577] Response Headers:
	I1114 14:05:13.451957 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:05:13.451965 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:05:13.451971 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:05:13.451978 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:05:13 GMT
	I1114 14:05:13.452027 1255771 round_trippers.go:580]     Audit-Id: f050e25e-0257-44e2-b42a-f94b22a16117
	I1114 14:05:13.452041 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:05:13.452174 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","resourceVersion":"401","creationTimestamp":"2023-11-14T14:04:07Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-683928","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T14_04_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T14:04:06Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1114 14:05:13.452605 1255771 pod_ready.go:92] pod "coredns-5dd5756b68-wxp87" in "kube-system" namespace has status "Ready":"True"
	I1114 14:05:13.452625 1255771 pod_ready.go:81] duration metric: took 6.720939ms waiting for pod "coredns-5dd5756b68-wxp87" in "kube-system" namespace to be "Ready" ...
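
The per-pod waits that follow (pod_ready.go) boil down to the same kind of condition scan, this time on the Pod object's PodReady condition. A minimal helper under the same assumptions as the node sketch above:

    package kubewait

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // podReady reports whether the named pod currently has the PodReady
    // condition set to True, the status the pod_ready.go lines wait for.
    func podReady(cs *kubernetes.Clientset, namespace, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods(namespace).Get(context.Background(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                return true, nil
            }
        }
        return false, nil
    }
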
	I1114 14:05:13.452638 1255771 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-683928" in "kube-system" namespace to be "Ready" ...
	I1114 14:05:13.452697 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-683928
	I1114 14:05:13.452708 1255771 round_trippers.go:469] Request Headers:
	I1114 14:05:13.452715 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:05:13.452723 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:05:13.455179 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:05:13.455200 1255771 round_trippers.go:577] Response Headers:
	I1114 14:05:13.455209 1255771 round_trippers.go:580]     Audit-Id: 8db4a915-edba-46d4-bb94-81b93ee804ec
	I1114 14:05:13.455215 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:05:13.455222 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:05:13.455228 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:05:13.455235 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:05:13.455241 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:05:13 GMT
	I1114 14:05:13.455423 1255771 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-683928","namespace":"kube-system","uid":"b8abc8dc-45bf-4827-8e3e-3de67a0f0e45","resourceVersion":"389","creationTimestamp":"2023-11-14T14:04:10Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"ec9bb177894ad2f6afba67be57938994","kubernetes.io/config.mirror":"ec9bb177894ad2f6afba67be57938994","kubernetes.io/config.seen":"2023-11-14T14:04:09.837680599Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:04:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I1114 14:05:13.455945 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:05:13.455963 1255771 round_trippers.go:469] Request Headers:
	I1114 14:05:13.455972 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:05:13.455979 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:05:13.458408 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:05:13.458436 1255771 round_trippers.go:577] Response Headers:
	I1114 14:05:13.458445 1255771 round_trippers.go:580]     Audit-Id: 0d3dbe34-6314-4d37-b2de-dc5734c2353f
	I1114 14:05:13.458452 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:05:13.458459 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:05:13.458465 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:05:13.458475 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:05:13.458482 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:05:13 GMT
	I1114 14:05:13.458595 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","resourceVersion":"401","creationTimestamp":"2023-11-14T14:04:07Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-683928","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T14_04_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T14:04:06Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1114 14:05:13.458982 1255771 pod_ready.go:92] pod "etcd-multinode-683928" in "kube-system" namespace has status "Ready":"True"
	I1114 14:05:13.458998 1255771 pod_ready.go:81] duration metric: took 6.353024ms waiting for pod "etcd-multinode-683928" in "kube-system" namespace to be "Ready" ...
	I1114 14:05:13.459015 1255771 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-683928" in "kube-system" namespace to be "Ready" ...
	I1114 14:05:13.459077 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-683928
	I1114 14:05:13.459085 1255771 round_trippers.go:469] Request Headers:
	I1114 14:05:13.459093 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:05:13.459100 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:05:13.461367 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:05:13.461442 1255771 round_trippers.go:577] Response Headers:
	I1114 14:05:13.461456 1255771 round_trippers.go:580]     Audit-Id: 60f41476-f9a0-4b46-91cf-b7214c48919e
	I1114 14:05:13.461464 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:05:13.461471 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:05:13.461477 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:05:13.461505 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:05:13.461520 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:05:13 GMT
	I1114 14:05:13.461656 1255771 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-683928","namespace":"kube-system","uid":"a4d6bf70-13a0-4603-8504-7497b58f5d76","resourceVersion":"390","creationTimestamp":"2023-11-14T14:04:10Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"f01db491b13fa9a24c600dd08fa8d46d","kubernetes.io/config.mirror":"f01db491b13fa9a24c600dd08fa8d46d","kubernetes.io/config.seen":"2023-11-14T14:04:09.837686146Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:04:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I1114 14:05:13.462219 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:05:13.462239 1255771 round_trippers.go:469] Request Headers:
	I1114 14:05:13.462248 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:05:13.462258 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:05:13.464455 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:05:13.464509 1255771 round_trippers.go:577] Response Headers:
	I1114 14:05:13.464530 1255771 round_trippers.go:580]     Audit-Id: 71feaed8-e81e-4e1b-aed0-12973d9e8e70
	I1114 14:05:13.464577 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:05:13.464604 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:05:13.464627 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:05:13.464639 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:05:13.464648 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:05:13 GMT
	I1114 14:05:13.464753 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","resourceVersion":"401","creationTimestamp":"2023-11-14T14:04:07Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-683928","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T14_04_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T14:04:06Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1114 14:05:13.465168 1255771 pod_ready.go:92] pod "kube-apiserver-multinode-683928" in "kube-system" namespace has status "Ready":"True"
	I1114 14:05:13.465187 1255771 pod_ready.go:81] duration metric: took 6.161599ms waiting for pod "kube-apiserver-multinode-683928" in "kube-system" namespace to be "Ready" ...
	I1114 14:05:13.465200 1255771 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-683928" in "kube-system" namespace to be "Ready" ...
	I1114 14:05:13.465263 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-683928
	I1114 14:05:13.465274 1255771 round_trippers.go:469] Request Headers:
	I1114 14:05:13.465282 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:05:13.465289 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:05:13.467709 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:05:13.467767 1255771 round_trippers.go:577] Response Headers:
	I1114 14:05:13.467799 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:05:13.467820 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:05:13 GMT
	I1114 14:05:13.467856 1255771 round_trippers.go:580]     Audit-Id: 4c9a8d79-6441-42f4-ae36-6ee015895480
	I1114 14:05:13.467884 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:05:13.467905 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:05:13.467941 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:05:13.468149 1255771 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-683928","namespace":"kube-system","uid":"fe4ca2c2-2dba-4c17-ac7b-a62caa16c5cb","resourceVersion":"391","creationTimestamp":"2023-11-14T14:04:10Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"992096f1db4796aad2da542d99dc8329","kubernetes.io/config.mirror":"992096f1db4796aad2da542d99dc8329","kubernetes.io/config.seen":"2023-11-14T14:04:09.837687631Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:04:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I1114 14:05:13.468739 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:05:13.468757 1255771 round_trippers.go:469] Request Headers:
	I1114 14:05:13.468766 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:05:13.468774 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:05:13.471090 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:05:13.471113 1255771 round_trippers.go:577] Response Headers:
	I1114 14:05:13.471122 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:05:13.471128 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:05:13.471136 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:05:13 GMT
	I1114 14:05:13.471142 1255771 round_trippers.go:580]     Audit-Id: e13d22df-c2ca-4251-a36c-437ff0b0d8bf
	I1114 14:05:13.471149 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:05:13.471159 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:05:13.471268 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","resourceVersion":"401","creationTimestamp":"2023-11-14T14:04:07Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-683928","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T14_04_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T14:04:06Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1114 14:05:13.471677 1255771 pod_ready.go:92] pod "kube-controller-manager-multinode-683928" in "kube-system" namespace has status "Ready":"True"
	I1114 14:05:13.471696 1255771 pod_ready.go:81] duration metric: took 6.485643ms waiting for pod "kube-controller-manager-multinode-683928" in "kube-system" namespace to be "Ready" ...
	I1114 14:05:13.471709 1255771 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vcfc4" in "kube-system" namespace to be "Ready" ...
	I1114 14:05:13.635077 1255771 request.go:629] Waited for 163.293348ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vcfc4
	I1114 14:05:13.635139 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vcfc4
	I1114 14:05:13.635156 1255771 round_trippers.go:469] Request Headers:
	I1114 14:05:13.635177 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:05:13.635185 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:05:13.637910 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:05:13.637982 1255771 round_trippers.go:577] Response Headers:
	I1114 14:05:13.638004 1255771 round_trippers.go:580]     Audit-Id: cc016991-cc6f-4cb4-b8ed-5fbeaedfdb5a
	I1114 14:05:13.638024 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:05:13.638062 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:05:13.638094 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:05:13.638108 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:05:13.638115 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:05:13 GMT
	I1114 14:05:13.638261 1255771 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-vcfc4","generateName":"kube-proxy-","namespace":"kube-system","uid":"679e31a8-7e53-42d9-afd5-5b3b18854981","resourceVersion":"374","creationTimestamp":"2023-11-14T14:04:22Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"33f72355-c83b-465e-bd1f-56fb8e339b0d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:04:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"33f72355-c83b-465e-bd1f-56fb8e339b0d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5509 chars]
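
The "Waited for ... due to client-side throttling, not priority and fairness" entries around here come from client-go's token-bucket rate limiter: with QPS and Burst left at zero on the rest.Config (as in the config dump earlier), client-go falls back to its defaults of 5 QPS with a burst of 10, and bursts of polling requests get queued. A sketch of raising those limits; the numbers are illustrative, not a recommendation:

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        // With QPS/Burst at 0 client-go uses its defaults (5 QPS, burst 10)
        // and logs throttling waits like the ones above. Illustrative bump:
        cfg.QPS = 50
        cfg.Burst = 100
        cs := kubernetes.NewForConfigOrDie(cfg)
        fmt.Println("client ready:", cs != nil)
    }
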
	I1114 14:05:13.835112 1255771 request.go:629] Waited for 196.351626ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:05:13.835182 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:05:13.835192 1255771 round_trippers.go:469] Request Headers:
	I1114 14:05:13.835202 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:05:13.835212 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:05:13.837807 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:05:13.837834 1255771 round_trippers.go:577] Response Headers:
	I1114 14:05:13.837843 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:05:13.837850 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:05:13.837884 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:05:13.837899 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:05:13.837910 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:05:13 GMT
	I1114 14:05:13.837920 1255771 round_trippers.go:580]     Audit-Id: f0f9ab42-c6b6-419d-b6d0-24a02bb9624c
	I1114 14:05:13.838044 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","resourceVersion":"401","creationTimestamp":"2023-11-14T14:04:07Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-683928","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T14_04_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T14:04:06Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1114 14:05:13.838441 1255771 pod_ready.go:92] pod "kube-proxy-vcfc4" in "kube-system" namespace has status "Ready":"True"
	I1114 14:05:13.838457 1255771 pod_ready.go:81] duration metric: took 366.738941ms waiting for pod "kube-proxy-vcfc4" in "kube-system" namespace to be "Ready" ...
	I1114 14:05:13.838469 1255771 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zlkdp" in "kube-system" namespace to be "Ready" ...
	I1114 14:05:14.034876 1255771 request.go:629] Waited for 196.336676ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zlkdp
	I1114 14:05:14.034958 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zlkdp
	I1114 14:05:14.034971 1255771 round_trippers.go:469] Request Headers:
	I1114 14:05:14.034983 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:05:14.034990 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:05:14.037658 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:05:14.037683 1255771 round_trippers.go:577] Response Headers:
	I1114 14:05:14.037694 1255771 round_trippers.go:580]     Audit-Id: f5cfce77-96c3-4c64-85d5-f4a081ddc698
	I1114 14:05:14.037701 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:05:14.037707 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:05:14.037713 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:05:14.037720 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:05:14.037731 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:05:14 GMT
	I1114 14:05:14.037842 1255771 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-zlkdp","generateName":"kube-proxy-","namespace":"kube-system","uid":"486be596-15f1-4c07-8169-521999b8e063","resourceVersion":"474","creationTimestamp":"2023-11-14T14:05:11Z","labels":{"controller-revision-hash":"dffc744c9","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"33f72355-c83b-465e-bd1f-56fb8e339b0d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:05:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"33f72355-c83b-465e-bd1f-56fb8e339b0d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5517 chars]
	I1114 14:05:14.234639 1255771 request.go:629] Waited for 196.287191ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-683928-m02
	I1114 14:05:14.234721 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928-m02
	I1114 14:05:14.234737 1255771 round_trippers.go:469] Request Headers:
	I1114 14:05:14.234750 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:05:14.234783 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:05:14.237249 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:05:14.237275 1255771 round_trippers.go:577] Response Headers:
	I1114 14:05:14.237284 1255771 round_trippers.go:580]     Audit-Id: 1b24d173-1634-42b0-b150-163f54942fdb
	I1114 14:05:14.237290 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:05:14.237297 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:05:14.237303 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:05:14.237336 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:05:14.237343 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:05:14 GMT
	I1114 14:05:14.237458 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928-m02","uid":"cc50001c-297d-41af-892c-eebeab7d42ac","resourceVersion":"478","creationTimestamp":"2023-11-14T14:05:11Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:05:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:05:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5378 chars]
	I1114 14:05:14.237867 1255771 pod_ready.go:92] pod "kube-proxy-zlkdp" in "kube-system" namespace has status "Ready":"True"
	I1114 14:05:14.237887 1255771 pod_ready.go:81] duration metric: took 399.404434ms waiting for pod "kube-proxy-zlkdp" in "kube-system" namespace to be "Ready" ...
	I1114 14:05:14.237899 1255771 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-683928" in "kube-system" namespace to be "Ready" ...
	I1114 14:05:14.434156 1255771 request.go:629] Waited for 196.188255ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-683928
	I1114 14:05:14.434237 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-683928
	I1114 14:05:14.434268 1255771 round_trippers.go:469] Request Headers:
	I1114 14:05:14.434277 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:05:14.434311 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:05:14.436939 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:05:14.436964 1255771 round_trippers.go:577] Response Headers:
	I1114 14:05:14.436974 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:05:14.436981 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:05:14.436987 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:05:14.436994 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:05:14.437003 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:05:14 GMT
	I1114 14:05:14.437015 1255771 round_trippers.go:580]     Audit-Id: 1023554b-ec50-48d7-a0a3-b9047f5658ef
	I1114 14:05:14.437133 1255771 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-683928","namespace":"kube-system","uid":"21e5e748-a68f-4769-9422-281cee1db8ac","resourceVersion":"388","creationTimestamp":"2023-11-14T14:04:10Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ab73247eb2483e7720424cdc15c19b03","kubernetes.io/config.mirror":"ab73247eb2483e7720424cdc15c19b03","kubernetes.io/config.seen":"2023-11-14T14:04:09.837688697Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-14T14:04:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I1114 14:05:14.634779 1255771 request.go:629] Waited for 197.178148ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:05:14.634867 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-683928
	I1114 14:05:14.634879 1255771 round_trippers.go:469] Request Headers:
	I1114 14:05:14.634892 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:05:14.634922 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:05:14.637487 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:05:14.637512 1255771 round_trippers.go:577] Response Headers:
	I1114 14:05:14.637522 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:05:14.637528 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:05:14.637570 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:05:14.637577 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:05:14 GMT
	I1114 14:05:14.637604 1255771 round_trippers.go:580]     Audit-Id: d1f27522-bfbc-416b-b968-c75beadf3a8f
	I1114 14:05:14.637627 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:05:14.637804 1255771 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","resourceVersion":"401","creationTimestamp":"2023-11-14T14:04:07Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-683928","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T14_04_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-14T14:04:06Z","fieldsType":"FieldsV1","fiel [truncated 6029 chars]
	I1114 14:05:14.638231 1255771 pod_ready.go:92] pod "kube-scheduler-multinode-683928" in "kube-system" namespace has status "Ready":"True"
	I1114 14:05:14.638251 1255771 pod_ready.go:81] duration metric: took 400.341289ms waiting for pod "kube-scheduler-multinode-683928" in "kube-system" namespace to be "Ready" ...
	I1114 14:05:14.638263 1255771 pod_ready.go:38] duration metric: took 1.200841582s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1114 14:05:14.638281 1255771 system_svc.go:44] waiting for kubelet service to be running ....
	I1114 14:05:14.638337 1255771 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 14:05:14.652937 1255771 system_svc.go:56] duration metric: took 14.648163ms WaitForService to wait for kubelet.
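
The kubelet service check that system_svc.go times here is the "sudo systemctl is-active --quiet service kubelet" command shown in the Run: lines, executed over SSH inside the node; is-active's exit status carries the whole answer. A local stand-in with os/exec, using the conventional single-unit form of the command rather than minikube's exact argument list:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // "systemctl is-active --quiet <unit>" exits 0 iff the unit is
        // active, so the returned error is the entire result.
        if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
            fmt.Println("kubelet is not active:", err)
            return
        }
        fmt.Println("kubelet is active")
    }
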
	I1114 14:05:14.652971 1255771 kubeadm.go:581] duration metric: took 2.745898032s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1114 14:05:14.652995 1255771 node_conditions.go:102] verifying NodePressure condition ...
	I1114 14:05:14.834542 1255771 request.go:629] Waited for 181.460568ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I1114 14:05:14.834622 1255771 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I1114 14:05:14.834632 1255771 round_trippers.go:469] Request Headers:
	I1114 14:05:14.834663 1255771 round_trippers.go:473]     Accept: application/json, */*
	I1114 14:05:14.834670 1255771 round_trippers.go:473]     User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	I1114 14:05:14.837415 1255771 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1114 14:05:14.837439 1255771 round_trippers.go:577] Response Headers:
	I1114 14:05:14.837448 1255771 round_trippers.go:580]     Audit-Id: 6cc12992-4782-489e-967e-2c6b25da2bcd
	I1114 14:05:14.837455 1255771 round_trippers.go:580]     Cache-Control: no-cache, private
	I1114 14:05:14.837491 1255771 round_trippers.go:580]     Content-Type: application/json
	I1114 14:05:14.837504 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 764467e0-836d-47ce-831d-2ef638b88710
	I1114 14:05:14.837516 1255771 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 6dc4c8e9-9a26-40c3-b783-d68c96137fbf
	I1114 14:05:14.837522 1255771 round_trippers.go:580]     Date: Tue, 14 Nov 2023 14:05:14 GMT
	I1114 14:05:14.837736 1255771 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"479"},"items":[{"metadata":{"name":"multinode-683928","uid":"50283084-c548-4846-a7bb-71ebf6b7240c","resourceVersion":"401","creationTimestamp":"2023-11-14T14:04:07Z","labels":{"beta.kubernetes.io/arch":"arm64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"arm64","kubernetes.io/hostname":"multinode-683928","kubernetes.io/os":"linux","minikube.k8s.io/commit":"6d8573efb5a7770e21024de23a29d810b200278b","minikube.k8s.io/name":"multinode-683928","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_14T14_04_10_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 12452 chars]
	I1114 14:05:14.838415 1255771 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1114 14:05:14.838439 1255771 node_conditions.go:123] node cpu capacity is 2
	I1114 14:05:14.838450 1255771 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1114 14:05:14.838455 1255771 node_conditions.go:123] node cpu capacity is 2
	I1114 14:05:14.838463 1255771 node_conditions.go:105] duration metric: took 185.46338ms to run NodePressure ...
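
The NodePressure verification above lists all nodes once and reads two capacity figures per node. A compact sketch of that listing; the log shows no thresholds being applied, so none are assumed here:

    package kubewait

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // logNodeCapacity prints the two capacity figures the node_conditions
    // lines above report for each node: ephemeral storage and CPU count.
    func logNodeCapacity(cs *kubernetes.Clientset) error {
        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            // e.g. "node storage ephemeral capacity is 203034800Ki", "node cpu capacity is 2"
            fmt.Printf("node %s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
        }
        return nil
    }
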
	I1114 14:05:14.838476 1255771 start.go:228] waiting for startup goroutines ...
	I1114 14:05:14.838503 1255771 start.go:242] writing updated cluster config ...
	I1114 14:05:14.838827 1255771 ssh_runner.go:195] Run: rm -f paused
	I1114 14:05:14.899807 1255771 start.go:600] kubectl: 1.28.3, cluster: 1.28.3 (minor skew: 0)
	I1114 14:05:14.902059 1255771 out.go:177] * Done! kubectl is now configured to use "multinode-683928" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* Nov 14 14:04:54 multinode-683928 crio[906]: time="2023-11-14 14:04:54.421721902Z" level=info msg="Starting container: dfd52c8cf5f184af7eced0438f20a1ad8fb31074170fee344d11a6eb7eb618ff" id=68a7175a-0c60-4500-9d2b-9e8210c057a8 name=/runtime.v1.RuntimeService/StartContainer
	Nov 14 14:04:54 multinode-683928 crio[906]: time="2023-11-14 14:04:54.425869304Z" level=info msg="Created container 2c5c3cb29f8214b4ab268840863d6096b38246e289e2a875708d1478ae1936c3: kube-system/coredns-5dd5756b68-wxp87/coredns" id=f524e603-9197-4e15-9e86-2fde697cc644 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 14 14:04:54 multinode-683928 crio[906]: time="2023-11-14 14:04:54.426822290Z" level=info msg="Starting container: 2c5c3cb29f8214b4ab268840863d6096b38246e289e2a875708d1478ae1936c3" id=58a04f3a-9253-4c3b-92ff-408f2d72ff51 name=/runtime.v1.RuntimeService/StartContainer
	Nov 14 14:04:54 multinode-683928 crio[906]: time="2023-11-14 14:04:54.439656821Z" level=info msg="Started container" PID=1953 containerID=dfd52c8cf5f184af7eced0438f20a1ad8fb31074170fee344d11a6eb7eb618ff description=kube-system/storage-provisioner/storage-provisioner id=68a7175a-0c60-4500-9d2b-9e8210c057a8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=dcc353b4ad3559efca30bbe1fc588042c6021bd3c444996d307c954cacf223ed
	Nov 14 14:04:54 multinode-683928 crio[906]: time="2023-11-14 14:04:54.446374412Z" level=info msg="Started container" PID=1959 containerID=2c5c3cb29f8214b4ab268840863d6096b38246e289e2a875708d1478ae1936c3 description=kube-system/coredns-5dd5756b68-wxp87/coredns id=58a04f3a-9253-4c3b-92ff-408f2d72ff51 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3d360e58d77b7aef3b24a524a4c533bee6864a9f484d8eaf953a7e1ecc2659c3
	Nov 14 14:05:16 multinode-683928 crio[906]: time="2023-11-14 14:05:16.120690802Z" level=info msg="Running pod sandbox: default/busybox-5bc68d56bd-vf6zm/POD" id=faf1a403-2dd9-419c-83cb-2d4d9b3720d8 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 14 14:05:16 multinode-683928 crio[906]: time="2023-11-14 14:05:16.120763097Z" level=warning msg="Allowed annotations are specified for workload []"
	Nov 14 14:05:16 multinode-683928 crio[906]: time="2023-11-14 14:05:16.139469661Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-vf6zm Namespace:default ID:f9a2aa5accb8b9d67b42e520b6378c5f3059fd434a851367d02ac29607fc95a6 UID:5f937850-8597-44b5-97da-ae39b53259e6 NetNS:/var/run/netns/a2b2bbf8-13d9-476b-abd8-c01efe1c3e28 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Nov 14 14:05:16 multinode-683928 crio[906]: time="2023-11-14 14:05:16.139511187Z" level=info msg="Adding pod default_busybox-5bc68d56bd-vf6zm to CNI network \"kindnet\" (type=ptp)"
	Nov 14 14:05:16 multinode-683928 crio[906]: time="2023-11-14 14:05:16.155412699Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-vf6zm Namespace:default ID:f9a2aa5accb8b9d67b42e520b6378c5f3059fd434a851367d02ac29607fc95a6 UID:5f937850-8597-44b5-97da-ae39b53259e6 NetNS:/var/run/netns/a2b2bbf8-13d9-476b-abd8-c01efe1c3e28 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Nov 14 14:05:16 multinode-683928 crio[906]: time="2023-11-14 14:05:16.155593030Z" level=info msg="Checking pod default_busybox-5bc68d56bd-vf6zm for CNI network kindnet (type=ptp)"
	Nov 14 14:05:16 multinode-683928 crio[906]: time="2023-11-14 14:05:16.175333294Z" level=info msg="Ran pod sandbox f9a2aa5accb8b9d67b42e520b6378c5f3059fd434a851367d02ac29607fc95a6 with infra container: default/busybox-5bc68d56bd-vf6zm/POD" id=faf1a403-2dd9-419c-83cb-2d4d9b3720d8 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 14 14:05:16 multinode-683928 crio[906]: time="2023-11-14 14:05:16.176433939Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=57bae342-1d14-4e2f-8c6b-572262d03d3a name=/runtime.v1.ImageService/ImageStatus
	Nov 14 14:05:16 multinode-683928 crio[906]: time="2023-11-14 14:05:16.176687484Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=57bae342-1d14-4e2f-8c6b-572262d03d3a name=/runtime.v1.ImageService/ImageStatus
	Nov 14 14:05:16 multinode-683928 crio[906]: time="2023-11-14 14:05:16.177672257Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=7f31c266-09a6-4385-aa00-94213bb7b651 name=/runtime.v1.ImageService/PullImage
	Nov 14 14:05:16 multinode-683928 crio[906]: time="2023-11-14 14:05:16.179258592Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Nov 14 14:05:17 multinode-683928 crio[906]: time="2023-11-14 14:05:17.078864157Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Nov 14 14:05:18 multinode-683928 crio[906]: time="2023-11-14 14:05:18.637657252Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3" id=7f31c266-09a6-4385-aa00-94213bb7b651 name=/runtime.v1.ImageService/PullImage
	Nov 14 14:05:18 multinode-683928 crio[906]: time="2023-11-14 14:05:18.639212901Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=172c12ad-cd3d-4c4f-966c-c2089a5f0c00 name=/runtime.v1.ImageService/ImageStatus
	Nov 14 14:05:18 multinode-683928 crio[906]: time="2023-11-14 14:05:18.639944572Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1496796,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=172c12ad-cd3d-4c4f-966c-c2089a5f0c00 name=/runtime.v1.ImageService/ImageStatus
	Nov 14 14:05:18 multinode-683928 crio[906]: time="2023-11-14 14:05:18.641378260Z" level=info msg="Creating container: default/busybox-5bc68d56bd-vf6zm/busybox" id=421bc884-2659-4473-9769-23905af67236 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 14 14:05:18 multinode-683928 crio[906]: time="2023-11-14 14:05:18.641506325Z" level=warning msg="Allowed annotations are specified for workload []"
	Nov 14 14:05:18 multinode-683928 crio[906]: time="2023-11-14 14:05:18.720122283Z" level=info msg="Created container fb7a286fa9f65eebeb3d80f0edab6f81352b87b5b70b5242d5cf475d60d63560: default/busybox-5bc68d56bd-vf6zm/busybox" id=421bc884-2659-4473-9769-23905af67236 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 14 14:05:18 multinode-683928 crio[906]: time="2023-11-14 14:05:18.720898754Z" level=info msg="Starting container: fb7a286fa9f65eebeb3d80f0edab6f81352b87b5b70b5242d5cf475d60d63560" id=3de9a259-6153-48e5-9aa2-8120d3c9f80a name=/runtime.v1.RuntimeService/StartContainer
	Nov 14 14:05:18 multinode-683928 crio[906]: time="2023-11-14 14:05:18.730115684Z" level=info msg="Started container" PID=2092 containerID=fb7a286fa9f65eebeb3d80f0edab6f81352b87b5b70b5242d5cf475d60d63560 description=default/busybox-5bc68d56bd-vf6zm/busybox id=3de9a259-6153-48e5-9aa2-8120d3c9f80a name=/runtime.v1.RuntimeService/StartContainer sandboxID=f9a2aa5accb8b9d67b42e520b6378c5f3059fd434a851367d02ac29607fc95a6
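	The entries above trace the standard CRI image-pull path: ImageStatus reports gcr.io/k8s-minikube/busybox:1.28 missing, PullImage resolves the tag to a digest, and CreateContainer/StartContainer run it. To replay the lookup by hand (a sketch; crictl ships in the minikube node image):
	
	    out/minikube-linux-arm64 -p multinode-683928 ssh "sudo crictl pull gcr.io/k8s-minikube/busybox:1.28"
	    out/minikube-linux-arm64 -p multinode-683928 ssh "sudo crictl inspecti gcr.io/k8s-minikube/busybox:1.28"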
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	fb7a286fa9f65       gcr.io/k8s-minikube/busybox@sha256:859d41e4316c182cb559f9ae3c5ffcac8602ee1179794a1707c06cd092a008d3   5 seconds ago        Running             busybox                   0                   f9a2aa5accb8b       busybox-5bc68d56bd-vf6zm
	2c5c3cb29f821       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      29 seconds ago       Running             coredns                   0                   3d360e58d77b7       coredns-5dd5756b68-wxp87
	dfd52c8cf5f18       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      29 seconds ago       Running             storage-provisioner       0                   dcc353b4ad355       storage-provisioner
	c751003ba85a3       a5dd5cdd6d3ef8058b7fbcecacbcee7f522fa8b9f3bb53bac6570e62ba2cbdbd                                      About a minute ago   Running             kube-proxy                0                   7aad65f3fa85d       kube-proxy-vcfc4
	00f02d313334d       04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26                                      About a minute ago   Running             kindnet-cni               0                   27f50a338a703       kindnet-sgvbn
	a2d922a4c9b5a       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      About a minute ago   Running             etcd                      0                   1db9d05e1aabc       etcd-multinode-683928
	6e75957e12030       8276439b4f237dda1f7820b0fcef600bb5662e441aa00e7b7c45843e60f04a16                                      About a minute ago   Running             kube-controller-manager   0                   43ea7810f7251       kube-controller-manager-multinode-683928
	2127a7129c769       42a4e73724daac2ee0c96eeeb79b9cf5f242fc3927ccfdc4df63b58140097314                                      About a minute ago   Running             kube-scheduler            0                   21e01776d20c7       kube-scheduler-multinode-683928
	90797a3e0e930       537e9a59ee2fdef3cc7f5ebd14f9c4c77047176fca2bc5599db196217efeb5d7                                      About a minute ago   Running             kube-apiserver            0                   c86a7f6409f78       kube-apiserver-multinode-683928
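	This table is the runtime's own view of the node; roughly the same listing can be reproduced with:
	
	    out/minikube-linux-arm64 -p multinode-683928 ssh "sudo crictl ps -a"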
	
	* 
	* ==> coredns [2c5c3cb29f8214b4ab268840863d6096b38246e289e2a875708d1478ae1936c3] <==
	* [INFO] 10.244.1.2:49431 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000094785s
	[INFO] 10.244.0.3:45119 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000109111s
	[INFO] 10.244.0.3:48085 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001129428s
	[INFO] 10.244.0.3:56124 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000058141s
	[INFO] 10.244.0.3:59777 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000042355s
	[INFO] 10.244.0.3:48681 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.007878388s
	[INFO] 10.244.0.3:55729 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000064164s
	[INFO] 10.244.0.3:47330 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000048082s
	[INFO] 10.244.0.3:48427 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000046023s
	[INFO] 10.244.1.2:44768 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000127876s
	[INFO] 10.244.1.2:47408 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000112499s
	[INFO] 10.244.1.2:41569 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000083774s
	[INFO] 10.244.1.2:40683 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000070367s
	[INFO] 10.244.0.3:50135 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142538s
	[INFO] 10.244.0.3:48496 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000100102s
	[INFO] 10.244.0.3:57148 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000071885s
	[INFO] 10.244.0.3:56420 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000063721s
	[INFO] 10.244.1.2:39228 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000108471s
	[INFO] 10.244.1.2:44786 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000095368s
	[INFO] 10.244.1.2:58252 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000087343s
	[INFO] 10.244.1.2:50821 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000083274s
	[INFO] 10.244.0.3:34565 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00011922s
	[INFO] 10.244.0.3:50280 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000080484s
	[INFO] 10.244.0.3:59369 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000047491s
	[INFO] 10.244.0.3:49307 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000055705s
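	The NXDOMAIN/NOERROR pattern above is resolv.conf search-path expansion: a short name such as kubernetes.default is tried against the pod's search domains (default.svc.cluster.local, svc.cluster.local, cluster.local) before being forwarded as-is, and only the fully qualified kubernetes.default.svc.cluster.local answers NOERROR. To reproduce the same query sequence from one of the test pods (pod name taken from the log above):
	
	    kubectl --context multinode-683928 exec busybox-5bc68d56bd-vf6zm -- nslookup kubernetes.default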
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-683928
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=multinode-683928
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6d8573efb5a7770e21024de23a29d810b200278b
	                    minikube.k8s.io/name=multinode-683928
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_14T14_04_10_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 14 Nov 2023 14:04:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-683928
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 14 Nov 2023 14:05:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 14 Nov 2023 14:04:53 +0000   Tue, 14 Nov 2023 14:04:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 14 Nov 2023 14:04:53 +0000   Tue, 14 Nov 2023 14:04:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 14 Nov 2023 14:04:53 +0000   Tue, 14 Nov 2023 14:04:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 14 Nov 2023 14:04:53 +0000   Tue, 14 Nov 2023 14:04:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-683928
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 09da29c5a4f94c188c83072ca11d8c1e
	  System UUID:                da51a73d-80de-43d5-823e-150d613829c6
	  Boot ID:                    3bdb9c53-2d63-44b9-be60-6ff1ad471e35
	  Kernel Version:             5.15.0-1049-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-vf6zm                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 coredns-5dd5756b68-wxp87                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     61s
	  kube-system                 etcd-multinode-683928                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         73s
	  kube-system                 kindnet-sgvbn                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      61s
	  kube-system                 kube-apiserver-multinode-683928             250m (12%)    0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 kube-controller-manager-multinode-683928    200m (10%)    0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 kube-proxy-vcfc4                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-scheduler-multinode-683928             100m (5%)     0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 60s   kube-proxy       
	  Normal  Starting                 74s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  74s   kubelet          Node multinode-683928 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    74s   kubelet          Node multinode-683928 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     74s   kubelet          Node multinode-683928 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           61s   node-controller  Node multinode-683928 event: Registered Node multinode-683928 in Controller
	  Normal  NodeReady                30s   kubelet          Node multinode-683928 status is now: NodeReady
	
	
	Name:               multinode-683928-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=multinode-683928-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 14 Nov 2023 14:05:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-683928-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 14 Nov 2023 14:05:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 14 Nov 2023 14:05:13 +0000   Tue, 14 Nov 2023 14:05:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 14 Nov 2023 14:05:13 +0000   Tue, 14 Nov 2023 14:05:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 14 Nov 2023 14:05:13 +0000   Tue, 14 Nov 2023 14:05:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 14 Nov 2023 14:05:13 +0000   Tue, 14 Nov 2023 14:05:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-683928-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022496Ki
	  pods:               110
	System Info:
	  Machine ID:                 de63c9c4aa7d47329e77940884a27341
	  System UUID:                b4bd60bd-123e-4eab-9de7-708fe3eb0e05
	  Boot ID:                    3bdb9c53-2d63-44b9-be60-6ff1ad471e35
	  Kernel Version:             5.15.0-1049-aws
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.3
	  Kube-Proxy Version:         v1.28.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-rl6d4    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kindnet-hxsnz               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      13s
	  kube-system                 kube-proxy-zlkdp            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11s                kube-proxy       
	  Normal  NodeHasSufficientMemory  13s (x5 over 15s)  kubelet          Node multinode-683928-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13s (x5 over 15s)  kubelet          Node multinode-683928-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13s (x5 over 15s)  kubelet          Node multinode-683928-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12s                node-controller  Node multinode-683928-m02 event: Registered Node multinode-683928-m02 in Controller
	  Normal  NodeReady                11s                kubelet          Node multinode-683928-m02 status is now: NodeReady
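	Both node blocks come from kubectl describe; the detail most relevant to cross-node traffic in this test is the PodCIDR split (10.244.0.0/24 on the control plane, 10.244.1.0/24 on m02). A compact way to check just that mapping:
	
	    kubectl --context multinode-683928 get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'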
	
	* 
	* ==> dmesg <==
	* [  +0.001143] FS-Cache: O-key=[8] '84643b0000000000'
	[  +0.000762] FS-Cache: N-cookie c=00000066 [p=0000005d fl=2 nc=0 na=1]
	[  +0.000999] FS-Cache: N-cookie d=00000000fbc4fe34{9p.inode} n=000000002244812d
	[  +0.001146] FS-Cache: N-key=[8] '84643b0000000000'
	[  +0.003454] FS-Cache: Duplicate cookie detected
	[  +0.000756] FS-Cache: O-cookie c=0000005f [p=0000005d fl=226 nc=0 na=1]
	[  +0.001057] FS-Cache: O-cookie d=00000000fbc4fe34{9p.inode} n=00000000ecb0ec67
	[  +0.001110] FS-Cache: O-key=[8] '84643b0000000000'
	[  +0.000749] FS-Cache: N-cookie c=00000067 [p=0000005d fl=2 nc=0 na=1]
	[  +0.001022] FS-Cache: N-cookie d=00000000fbc4fe34{9p.inode} n=0000000038574f41
	[  +0.001139] FS-Cache: N-key=[8] '84643b0000000000'
	[  +3.132585] FS-Cache: Duplicate cookie detected
	[  +0.000755] FS-Cache: O-cookie c=0000005e [p=0000005d fl=226 nc=0 na=1]
	[  +0.001032] FS-Cache: O-cookie d=00000000fbc4fe34{9p.inode} n=00000000e83a4aa7
	[  +0.001160] FS-Cache: O-key=[8] '83643b0000000000'
	[  +0.000753] FS-Cache: N-cookie c=00000069 [p=0000005d fl=2 nc=0 na=1]
	[  +0.000982] FS-Cache: N-cookie d=00000000fbc4fe34{9p.inode} n=000000002244812d
	[  +0.001111] FS-Cache: N-key=[8] '83643b0000000000'
	[  +0.323161] FS-Cache: Duplicate cookie detected
	[  +0.000805] FS-Cache: O-cookie c=00000063 [p=0000005d fl=226 nc=0 na=1]
	[  +0.001104] FS-Cache: O-cookie d=00000000fbc4fe34{9p.inode} n=0000000060b8cdea
	[  +0.001286] FS-Cache: O-key=[8] '89643b0000000000'
	[  +0.000771] FS-Cache: N-cookie c=0000006a [p=0000005d fl=2 nc=0 na=1]
	[  +0.001023] FS-Cache: N-cookie d=00000000fbc4fe34{9p.inode} n=00000000495e4eb3
	[  +0.001223] FS-Cache: N-key=[8] '89643b0000000000'
	
	* 
	* ==> etcd [a2d922a4c9b5ae1a06eb126270ae2c200798d420f04812d8661f10d8cf169a8d] <==
	* {"level":"info","ts":"2023-11-14T14:04:01.992421Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-11-14T14:04:01.992457Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-11-14T14:04:01.992466Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-11-14T14:04:01.993032Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-11-14T14:04:01.993057Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-11-14T14:04:02.000694Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2023-11-14T14:04:02.000827Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2023-11-14T14:04:02.664164Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2023-11-14T14:04:02.664278Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-11-14T14:04:02.664331Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2023-11-14T14:04:02.664374Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2023-11-14T14:04:02.664405Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-11-14T14:04:02.664442Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2023-11-14T14:04:02.664478Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-11-14T14:04:02.668702Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-14T14:04:02.672795Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-683928 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-11-14T14:04:02.674734Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-14T14:04:02.674859Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-14T14:04:02.674913Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-14T14:04:02.674975Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-14T14:04:02.676909Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2023-11-14T14:04:02.68303Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-14T14:04:02.684096Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-11-14T14:04:02.716574Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-14T14:04:02.716616Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  14:05:24 up 10:47,  0 users,  load average: 1.37, 2.06, 1.53
	Linux multinode-683928 5.15.0-1049-aws #54~20.04.1-Ubuntu SMP Fri Oct 6 22:07:16 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [00f02d313334d88224dc21e161c9f989dd4ca606320d71a4d6fb9692e492797f] <==
	* I1114 14:04:23.308847       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1114 14:04:23.309182       1 main.go:107] hostIP = 192.168.58.2
	podIP = 192.168.58.2
	I1114 14:04:23.309399       1 main.go:116] setting mtu 1500 for CNI 
	I1114 14:04:23.309471       1 main.go:146] kindnetd IP family: "ipv4"
	I1114 14:04:23.309549       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1114 14:04:53.601318       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I1114 14:04:53.615491       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1114 14:04:53.615521       1 main.go:227] handling current node
	I1114 14:05:03.633096       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1114 14:05:03.633128       1 main.go:227] handling current node
	I1114 14:05:13.646063       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1114 14:05:13.646091       1 main.go:227] handling current node
	I1114 14:05:13.646103       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1114 14:05:13.646109       1 main.go:250] Node multinode-683928-m02 has CIDR [10.244.1.0/24] 
	I1114 14:05:13.646273       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
	I1114 14:05:23.657101       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I1114 14:05:23.657131       1 main.go:227] handling current node
	I1114 14:05:23.657142       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I1114 14:05:23.657148       1 main.go:250] Node multinode-683928-m02 has CIDR [10.244.1.0/24] 
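	The "Adding route" entry is kindnet programming a host route so traffic for the second node's pod subnet is forwarded to that node's InternalIP; the shell-level equivalent of the netlink call is roughly "ip route replace 10.244.1.0/24 via 192.168.58.3". To confirm the route landed:
	
	    out/minikube-linux-arm64 -p multinode-683928 ssh "ip route show 10.244.1.0/24"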
	
	* 
	* ==> kube-apiserver [90797a3e0e930936a9e28981c3fdc1d9d6af3d8a0a27c6cf8c6fc70e4d788473] <==
	* I1114 14:04:07.031563       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1114 14:04:07.031570       1 cache.go:39] Caches are synced for autoregister controller
	I1114 14:04:07.036950       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1114 14:04:07.037011       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1114 14:04:07.037028       1 shared_informer.go:318] Caches are synced for configmaps
	I1114 14:04:07.040822       1 controller.go:624] quota admission added evaluator for: namespaces
	I1114 14:04:07.041772       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1114 14:04:07.042373       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1114 14:04:07.043294       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1114 14:04:07.222734       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1114 14:04:07.746412       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1114 14:04:07.751085       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1114 14:04:07.751108       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1114 14:04:08.281723       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1114 14:04:08.331178       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1114 14:04:08.474179       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1114 14:04:08.482187       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I1114 14:04:08.483287       1 controller.go:624] quota admission added evaluator for: endpoints
	I1114 14:04:08.487801       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1114 14:04:08.922396       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1114 14:04:09.736746       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1114 14:04:09.748722       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1114 14:04:09.761958       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1114 14:04:22.481050       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1114 14:04:22.530511       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [6e75957e1203049bc42069624467b0cb6eda314a58a0b39cafe8543183305c33] <==
	* I1114 14:04:23.325190       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="26.063659ms"
	I1114 14:04:23.325477       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="66.946µs"
	I1114 14:04:53.965093       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="85.39µs"
	I1114 14:04:53.983964       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="112.656µs"
	I1114 14:04:55.090695       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="11.318889ms"
	I1114 14:04:55.090905       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="60.52µs"
	I1114 14:04:57.216677       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1114 14:05:11.262355       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-683928-m02\" does not exist"
	I1114 14:05:11.284402       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-683928-m02" podCIDRs=["10.244.1.0/24"]
	I1114 14:05:11.293647       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-zlkdp"
	I1114 14:05:11.293678       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-hxsnz"
	I1114 14:05:12.218606       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-683928-m02"
	I1114 14:05:12.218663       1 event.go:307] "Event occurred" object="multinode-683928-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-683928-m02 event: Registered Node multinode-683928-m02 in Controller"
	I1114 14:05:13.127846       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-683928-m02"
	I1114 14:05:15.758589       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I1114 14:05:15.783928       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-rl6d4"
	I1114 14:05:15.799883       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-vf6zm"
	I1114 14:05:15.824053       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="66.137413ms"
	I1114 14:05:15.877718       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="53.513852ms"
	I1114 14:05:15.877883       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="49.419µs"
	I1114 14:05:17.229465       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-rl6d4" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-rl6d4"
	I1114 14:05:18.825032       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="4.680092ms"
	I1114 14:05:18.825904       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="39.368µs"
	I1114 14:05:19.128037       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="9.517154ms"
	I1114 14:05:19.128512       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="62.72µs"
	
	* 
	* ==> kube-proxy [c751003ba85a38a05a6b01fc2a71c2960bf52163528e0155660de101ae1c6181] <==
	* I1114 14:04:23.483489       1 server_others.go:69] "Using iptables proxy"
	I1114 14:04:23.708352       1 node.go:141] Successfully retrieved node IP: 192.168.58.2
	I1114 14:04:23.837607       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1114 14:04:23.861730       1 server_others.go:152] "Using iptables Proxier"
	I1114 14:04:23.862887       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1114 14:04:23.862966       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1114 14:04:23.863095       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1114 14:04:23.864995       1 server.go:846] "Version info" version="v1.28.3"
	I1114 14:04:23.865985       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1114 14:04:23.866848       1 config.go:188] "Starting service config controller"
	I1114 14:04:23.873534       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1114 14:04:23.873669       1 config.go:97] "Starting endpoint slice config controller"
	I1114 14:04:23.873706       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1114 14:04:23.874294       1 config.go:315] "Starting node config controller"
	I1114 14:04:23.906437       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1114 14:04:23.974428       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1114 14:04:23.974514       1 shared_informer.go:318] Caches are synced for service config
	I1114 14:04:24.006720       1 shared_informer.go:318] Caches are synced for node config
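	kube-proxy is running in iptables mode here, so every Service is materialized as rules hanging off the KUBE-SERVICES chain in the nat table. One way to spot-check the programmed rules:
	
	    out/minikube-linux-arm64 -p multinode-683928 ssh "sudo iptables -t nat -L KUBE-SERVICES -n" | head -n 20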
	
	* 
	* ==> kube-scheduler [2127a7129c7698d238d05287c7e60f5ef2e85c2c959de12eba91c1ab5f6b4a9d] <==
	* W1114 14:04:06.983503       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1114 14:04:06.983633       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1114 14:04:06.983979       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1114 14:04:06.984054       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1114 14:04:06.984164       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1114 14:04:06.984217       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1114 14:04:06.984316       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1114 14:04:06.984365       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1114 14:04:06.984445       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1114 14:04:06.984480       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1114 14:04:06.984997       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1114 14:04:06.985068       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1114 14:04:07.802782       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1114 14:04:07.802908       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1114 14:04:07.818717       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1114 14:04:07.818820       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1114 14:04:07.848840       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1114 14:04:07.848942       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1114 14:04:07.891120       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1114 14:04:07.891249       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1114 14:04:07.927699       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1114 14:04:07.927815       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1114 14:04:07.981601       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1114 14:04:07.981717       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I1114 14:04:10.672907       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
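	The forbidden errors above are the usual scheduler startup race: its informers begin listing before the apiserver has finished publishing the system RBAC bindings, and they stop once the caches sync (last line). Were they to persist, the grants could be checked via impersonation:
	
	    kubectl --context multinode-683928 auth can-i list pods --as=system:kube-scheduler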
	
	* 
	* ==> kubelet <==
	* Nov 14 14:04:22 multinode-683928 kubelet[1407]: I1114 14:04:22.655731    1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/679e31a8-7e53-42d9-afd5-5b3b18854981-xtables-lock\") pod \"kube-proxy-vcfc4\" (UID: \"679e31a8-7e53-42d9-afd5-5b3b18854981\") " pod="kube-system/kube-proxy-vcfc4"
	Nov 14 14:04:22 multinode-683928 kubelet[1407]: I1114 14:04:22.655784    1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/679e31a8-7e53-42d9-afd5-5b3b18854981-lib-modules\") pod \"kube-proxy-vcfc4\" (UID: \"679e31a8-7e53-42d9-afd5-5b3b18854981\") " pod="kube-system/kube-proxy-vcfc4"
	Nov 14 14:04:22 multinode-683928 kubelet[1407]: I1114 14:04:22.655819    1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/679e31a8-7e53-42d9-afd5-5b3b18854981-kube-proxy\") pod \"kube-proxy-vcfc4\" (UID: \"679e31a8-7e53-42d9-afd5-5b3b18854981\") " pod="kube-system/kube-proxy-vcfc4"
	Nov 14 14:04:22 multinode-683928 kubelet[1407]: I1114 14:04:22.655845    1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7c963530-9d71-4472-afb4-b6a45c1b8186-lib-modules\") pod \"kindnet-sgvbn\" (UID: \"7c963530-9d71-4472-afb4-b6a45c1b8186\") " pod="kube-system/kindnet-sgvbn"
	Nov 14 14:04:22 multinode-683928 kubelet[1407]: I1114 14:04:22.655870    1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r9gp\" (UniqueName: \"kubernetes.io/projected/7c963530-9d71-4472-afb4-b6a45c1b8186-kube-api-access-4r9gp\") pod \"kindnet-sgvbn\" (UID: \"7c963530-9d71-4472-afb4-b6a45c1b8186\") " pod="kube-system/kindnet-sgvbn"
	Nov 14 14:04:22 multinode-683928 kubelet[1407]: I1114 14:04:22.655897    1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxfdc\" (UniqueName: \"kubernetes.io/projected/679e31a8-7e53-42d9-afd5-5b3b18854981-kube-api-access-gxfdc\") pod \"kube-proxy-vcfc4\" (UID: \"679e31a8-7e53-42d9-afd5-5b3b18854981\") " pod="kube-system/kube-proxy-vcfc4"
	Nov 14 14:04:22 multinode-683928 kubelet[1407]: I1114 14:04:22.655922    1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/7c963530-9d71-4472-afb4-b6a45c1b8186-cni-cfg\") pod \"kindnet-sgvbn\" (UID: \"7c963530-9d71-4472-afb4-b6a45c1b8186\") " pod="kube-system/kindnet-sgvbn"
	Nov 14 14:04:22 multinode-683928 kubelet[1407]: I1114 14:04:22.655944    1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7c963530-9d71-4472-afb4-b6a45c1b8186-xtables-lock\") pod \"kindnet-sgvbn\" (UID: \"7c963530-9d71-4472-afb4-b6a45c1b8186\") " pod="kube-system/kindnet-sgvbn"
	Nov 14 14:04:22 multinode-683928 kubelet[1407]: W1114 14:04:22.906458    1407 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/95780648ef67ea835cd8638bb1ad39dc71166d07c9ffffe13531b9d9cc13b597/crio-7aad65f3fa85dd2a7b96eda0f3df7d971115cd990d5a1c541eb515145990c039 WatchSource:0}: Error finding container 7aad65f3fa85dd2a7b96eda0f3df7d971115cd990d5a1c541eb515145990c039: Status 404 returned error can't find the container with id 7aad65f3fa85dd2a7b96eda0f3df7d971115cd990d5a1c541eb515145990c039
	Nov 14 14:04:24 multinode-683928 kubelet[1407]: I1114 14:04:24.042537    1407 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-vcfc4" podStartSLOduration=2.042491569 podCreationTimestamp="2023-11-14 14:04:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-11-14 14:04:24.011556362 +0000 UTC m=+14.306162620" watchObservedRunningTime="2023-11-14 14:04:24.042491569 +0000 UTC m=+14.337097827"
	Nov 14 14:04:29 multinode-683928 kubelet[1407]: I1114 14:04:29.876651    1407 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-sgvbn" podStartSLOduration=7.876609997 podCreationTimestamp="2023-11-14 14:04:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-11-14 14:04:24.045736963 +0000 UTC m=+14.340343212" watchObservedRunningTime="2023-11-14 14:04:29.876609997 +0000 UTC m=+20.171216263"
	Nov 14 14:04:53 multinode-683928 kubelet[1407]: I1114 14:04:53.923174    1407 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 14 14:04:53 multinode-683928 kubelet[1407]: I1114 14:04:53.964091    1407 topology_manager.go:215] "Topology Admit Handler" podUID="51c2bd2a-c15e-4489-ad3b-7ca65e4ec898" podNamespace="kube-system" podName="coredns-5dd5756b68-wxp87"
	Nov 14 14:04:53 multinode-683928 kubelet[1407]: I1114 14:04:53.970252    1407 topology_manager.go:215] "Topology Admit Handler" podUID="5444133d-cc06-4053-afe8-529d67cee17e" podNamespace="kube-system" podName="storage-provisioner"
	Nov 14 14:04:53 multinode-683928 kubelet[1407]: I1114 14:04:53.997746    1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/51c2bd2a-c15e-4489-ad3b-7ca65e4ec898-config-volume\") pod \"coredns-5dd5756b68-wxp87\" (UID: \"51c2bd2a-c15e-4489-ad3b-7ca65e4ec898\") " pod="kube-system/coredns-5dd5756b68-wxp87"
	Nov 14 14:04:53 multinode-683928 kubelet[1407]: I1114 14:04:53.997798    1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqq2r\" (UniqueName: \"kubernetes.io/projected/51c2bd2a-c15e-4489-ad3b-7ca65e4ec898-kube-api-access-cqq2r\") pod \"coredns-5dd5756b68-wxp87\" (UID: \"51c2bd2a-c15e-4489-ad3b-7ca65e4ec898\") " pod="kube-system/coredns-5dd5756b68-wxp87"
	Nov 14 14:04:53 multinode-683928 kubelet[1407]: I1114 14:04:53.997827    1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcx7n\" (UniqueName: \"kubernetes.io/projected/5444133d-cc06-4053-afe8-529d67cee17e-kube-api-access-dcx7n\") pod \"storage-provisioner\" (UID: \"5444133d-cc06-4053-afe8-529d67cee17e\") " pod="kube-system/storage-provisioner"
	Nov 14 14:04:53 multinode-683928 kubelet[1407]: I1114 14:04:53.997851    1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/5444133d-cc06-4053-afe8-529d67cee17e-tmp\") pod \"storage-provisioner\" (UID: \"5444133d-cc06-4053-afe8-529d67cee17e\") " pod="kube-system/storage-provisioner"
	Nov 14 14:04:54 multinode-683928 kubelet[1407]: W1114 14:04:54.318915    1407 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/95780648ef67ea835cd8638bb1ad39dc71166d07c9ffffe13531b9d9cc13b597/crio-dcc353b4ad3559efca30bbe1fc588042c6021bd3c444996d307c954cacf223ed WatchSource:0}: Error finding container dcc353b4ad3559efca30bbe1fc588042c6021bd3c444996d307c954cacf223ed: Status 404 returned error can't find the container with id dcc353b4ad3559efca30bbe1fc588042c6021bd3c444996d307c954cacf223ed
	Nov 14 14:04:54 multinode-683928 kubelet[1407]: W1114 14:04:54.319552    1407 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/95780648ef67ea835cd8638bb1ad39dc71166d07c9ffffe13531b9d9cc13b597/crio-3d360e58d77b7aef3b24a524a4c533bee6864a9f484d8eaf953a7e1ecc2659c3 WatchSource:0}: Error finding container 3d360e58d77b7aef3b24a524a4c533bee6864a9f484d8eaf953a7e1ecc2659c3: Status 404 returned error can't find the container with id 3d360e58d77b7aef3b24a524a4c533bee6864a9f484d8eaf953a7e1ecc2659c3
	Nov 14 14:04:55 multinode-683928 kubelet[1407]: I1114 14:04:55.077166    1407 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=31.077120332 podCreationTimestamp="2023-11-14 14:04:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-11-14 14:04:55.065289462 +0000 UTC m=+45.359895712" watchObservedRunningTime="2023-11-14 14:04:55.077120332 +0000 UTC m=+45.371726590"
	Nov 14 14:04:55 multinode-683928 kubelet[1407]: I1114 14:04:55.077518    1407 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-wxp87" podStartSLOduration=33.077495681 podCreationTimestamp="2023-11-14 14:04:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-11-14 14:04:55.077258398 +0000 UTC m=+45.371864648" watchObservedRunningTime="2023-11-14 14:04:55.077495681 +0000 UTC m=+45.372101939"
	Nov 14 14:05:15 multinode-683928 kubelet[1407]: I1114 14:05:15.818891    1407 topology_manager.go:215] "Topology Admit Handler" podUID="5f937850-8597-44b5-97da-ae39b53259e6" podNamespace="default" podName="busybox-5bc68d56bd-vf6zm"
	Nov 14 14:05:15 multinode-683928 kubelet[1407]: I1114 14:05:15.855644    1407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z422z\" (UniqueName: \"kubernetes.io/projected/5f937850-8597-44b5-97da-ae39b53259e6-kube-api-access-z422z\") pod \"busybox-5bc68d56bd-vf6zm\" (UID: \"5f937850-8597-44b5-97da-ae39b53259e6\") " pod="default/busybox-5bc68d56bd-vf6zm"
	Nov 14 14:05:16 multinode-683928 kubelet[1407]: W1114 14:05:16.173236    1407 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/95780648ef67ea835cd8638bb1ad39dc71166d07c9ffffe13531b9d9cc13b597/crio-f9a2aa5accb8b9d67b42e520b6378c5f3059fd434a851367d02ac29607fc95a6 WatchSource:0}: Error finding container f9a2aa5accb8b9d67b42e520b6378c5f3059fd434a851367d02ac29607fc95a6: Status 404 returned error can't find the container with id f9a2aa5accb8b9d67b42e520b6378c5f3059fd434a851367d02ac29607fc95a6
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p multinode-683928 -n multinode-683928
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-683928 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (4.25s)
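Note on the journal above: the kubelet "manager.go:1159 ... Status 404" warnings are cAdvisor losing a race against short-lived CRI-O container cgroups and are commonly benign noise; the actual failure is the pod-to-host ping. A hand-run approximation of the check this test performs (pod name and context taken from the log above; <HOST_IP> is a placeholder for the host-side address, which this excerpt does not show):

	# sketch only: exec the same ping the test runs, from the busybox pod seen in the journal
	kubectl --context multinode-683928 exec busybox-5bc68d56bd-vf6zm -- sh -c "ping -c 1 <HOST_IP>"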

TestRunningBinaryUpgrade (77.94s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.17.0.1899494093.exe start -p running-upgrade-042371 --memory=2200 --vm-driver=docker  --container-runtime=crio
E1114 14:20:20.964225 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/functional-943397/client.crt: no such file or directory
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.17.0.1899494093.exe start -p running-upgrade-042371 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m9.548953608s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-042371 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:143: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p running-upgrade-042371 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (3.607774664s)

-- stdout --
	* [running-upgrade-042371] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17581
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17581-1186318/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17581-1186318/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	* Using the docker driver based on existing profile
	* Starting control plane node running-upgrade-042371 in cluster running-upgrade-042371
	* Pulling base image ...
	* Updating the running docker "running-upgrade-042371" container ...
	
	

-- /stdout --
** stderr ** 
	I1114 14:20:52.588861 1316169 out.go:296] Setting OutFile to fd 1 ...
	I1114 14:20:52.589035 1316169 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 14:20:52.589047 1316169 out.go:309] Setting ErrFile to fd 2...
	I1114 14:20:52.589054 1316169 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 14:20:52.589330 1316169 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17581-1186318/.minikube/bin
	I1114 14:20:52.589737 1316169 out.go:303] Setting JSON to false
	I1114 14:20:52.590865 1316169 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":39799,"bootTime":1699931854,"procs":312,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1114 14:20:52.590947 1316169 start.go:138] virtualization:  
	I1114 14:20:52.593800 1316169 out.go:177] * [running-upgrade-042371] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1114 14:20:52.595673 1316169 out.go:177]   - MINIKUBE_LOCATION=17581
	I1114 14:20:52.597553 1316169 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1114 14:20:52.595819 1316169 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4
	I1114 14:20:52.595863 1316169 notify.go:220] Checking for updates...
	I1114 14:20:52.599651 1316169 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17581-1186318/kubeconfig
	I1114 14:20:52.601658 1316169 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17581-1186318/.minikube
	I1114 14:20:52.603680 1316169 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1114 14:20:52.605468 1316169 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1114 14:20:52.607742 1316169 config.go:182] Loaded profile config "running-upgrade-042371": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1114 14:20:52.610080 1316169 out.go:177] * Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	I1114 14:20:52.612041 1316169 driver.go:378] Setting default libvirt URI to qemu:///system
	I1114 14:20:52.649886 1316169 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1114 14:20:52.650001 1316169 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1114 14:20:52.756872 1316169 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:54 SystemTime:2023-11-14 14:20:52.745881645 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1114 14:20:52.756996 1316169 docker.go:295] overlay module found
	I1114 14:20:52.758912 1316169 out.go:177] * Using the docker driver based on existing profile
	I1114 14:20:52.760734 1316169 start.go:298] selected driver: docker
	I1114 14:20:52.760750 1316169 start.go:902] validating driver "docker" against &{Name:running-upgrade-042371 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:running-upgrade-042371 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.70.132 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1114 14:20:52.760853 1316169 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1114 14:20:52.761496 1316169 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1114 14:20:52.789638 1316169 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4.checksum
	I1114 14:20:52.850906 1316169 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:54 SystemTime:2023-11-14 14:20:52.840946506 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1114 14:20:52.851205 1316169 cni.go:84] Creating CNI manager for ""
	I1114 14:20:52.851227 1316169 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1114 14:20:52.851242 1316169 start_flags.go:323] config:
	{Name:running-upgrade-042371 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:running-upgrade-042371 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.70.132 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1114 14:20:52.854171 1316169 out.go:177] * Starting control plane node running-upgrade-042371 in cluster running-upgrade-042371
	I1114 14:20:52.856222 1316169 cache.go:121] Beginning downloading kic base image for docker with crio
	I1114 14:20:52.862459 1316169 out.go:177] * Pulling base image ...
	I1114 14:20:52.864898 1316169 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I1114 14:20:52.865007 1316169 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I1114 14:20:52.884294 1316169 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon, skipping pull
	I1114 14:20:52.884319 1316169 cache.go:144] gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e exists in daemon, skipping load
	W1114 14:20:52.942326 1316169 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1114 14:20:52.942480 1316169 profile.go:148] Saving config to /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/running-upgrade-042371/config.json ...
	I1114 14:20:52.942822 1316169 cache.go:194] Successfully downloaded all kic artifacts
	I1114 14:20:52.942875 1316169 start.go:365] acquiring machines lock for running-upgrade-042371: {Name:mk4bcc999db280455017de1667deb5638aa4f55f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 14:20:52.942943 1316169 start.go:369] acquired machines lock for "running-upgrade-042371" in 37.235µs
	I1114 14:20:52.942966 1316169 start.go:96] Skipping create...Using existing machine configuration
	I1114 14:20:52.942989 1316169 fix.go:54] fixHost starting: 
	I1114 14:20:52.943263 1316169 cli_runner.go:164] Run: docker container inspect running-upgrade-042371 --format={{.State.Status}}
	I1114 14:20:52.943449 1316169 cache.go:107] acquiring lock: {Name:mkc3f9e8e80dc5cc581400c732c2f75eea7927c7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 14:20:52.943518 1316169 cache.go:115] /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1114 14:20:52.943535 1316169 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 88.328µs
	I1114 14:20:52.943549 1316169 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1114 14:20:52.943560 1316169 cache.go:107] acquiring lock: {Name:mk69a9d8cb51e3aa2e98715b9e677afbd5be8339 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 14:20:52.943594 1316169 cache.go:115] /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I1114 14:20:52.943605 1316169 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 44.996µs
	I1114 14:20:52.943611 1316169 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	I1114 14:20:52.943621 1316169 cache.go:107] acquiring lock: {Name:mkeaa593a19bc596f779e94db679e747fb3e86dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 14:20:52.943646 1316169 cache.go:115] /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I1114 14:20:52.943654 1316169 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 33.871µs
	I1114 14:20:52.943660 1316169 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	I1114 14:20:52.943668 1316169 cache.go:107] acquiring lock: {Name:mkadd812de336b999f8e5a8809642906e01f1791 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 14:20:52.943696 1316169 cache.go:115] /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I1114 14:20:52.943704 1316169 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 36.177µs
	I1114 14:20:52.943710 1316169 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	I1114 14:20:52.943719 1316169 cache.go:107] acquiring lock: {Name:mkf5a96f0221a0606c0d1d34ec321ba5896544c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 14:20:52.943748 1316169 cache.go:115] /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I1114 14:20:52.943752 1316169 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 34.313µs
	I1114 14:20:52.943758 1316169 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	I1114 14:20:52.943771 1316169 cache.go:107] acquiring lock: {Name:mkd39b5b1ff28004cfb5f4307bd5b83ed11c8162 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 14:20:52.943804 1316169 cache.go:115] /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I1114 14:20:52.943815 1316169 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 44.48µs
	I1114 14:20:52.943822 1316169 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	I1114 14:20:52.943835 1316169 cache.go:107] acquiring lock: {Name:mk1e00d1f8459fd8271d7a94ac6e5793eb6baf5b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 14:20:52.943866 1316169 cache.go:115] /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I1114 14:20:52.943875 1316169 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 43.987µs
	I1114 14:20:52.943881 1316169 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I1114 14:20:52.943889 1316169 cache.go:107] acquiring lock: {Name:mkc2480d47d595c5e286f3ea4f50224c887ce0f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 14:20:52.943918 1316169 cache.go:115] /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I1114 14:20:52.943922 1316169 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 33.879µs
	I1114 14:20:52.943928 1316169 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	I1114 14:20:52.943934 1316169 cache.go:87] Successfully saved all images to host disk.
	I1114 14:20:52.976199 1316169 fix.go:102] recreateIfNeeded on running-upgrade-042371: state=Running err=<nil>
	W1114 14:20:52.976244 1316169 fix.go:128] unexpected machine state, will restart: <nil>
	I1114 14:20:52.978854 1316169 out.go:177] * Updating the running docker "running-upgrade-042371" container ...
	I1114 14:20:52.980972 1316169 machine.go:88] provisioning docker machine ...
	I1114 14:20:52.981005 1316169 ubuntu.go:169] provisioning hostname "running-upgrade-042371"
	I1114 14:20:52.981096 1316169 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-042371
	I1114 14:20:53.010539 1316169 main.go:141] libmachine: Using SSH client type: native
	I1114 14:20:53.010982 1316169 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bded0] 0x3c0640 <nil>  [] 0s} 127.0.0.1 34467 <nil> <nil>}
	I1114 14:20:53.010995 1316169 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-042371 && echo "running-upgrade-042371" | sudo tee /etc/hostname
	I1114 14:20:53.183559 1316169 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-042371
	
	I1114 14:20:53.183642 1316169 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-042371
	I1114 14:20:53.219621 1316169 main.go:141] libmachine: Using SSH client type: native
	I1114 14:20:53.220060 1316169 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bded0] 0x3c0640 <nil>  [] 0s} 127.0.0.1 34467 <nil> <nil>}
	I1114 14:20:53.220083 1316169 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-042371' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-042371/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-042371' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1114 14:20:53.370501 1316169 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1114 14:20:53.370533 1316169 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17581-1186318/.minikube CaCertPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17581-1186318/.minikube}
	I1114 14:20:53.370552 1316169 ubuntu.go:177] setting up certificates
	I1114 14:20:53.370562 1316169 provision.go:83] configureAuth start
	I1114 14:20:53.370624 1316169 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-042371
	I1114 14:20:53.411952 1316169 provision.go:138] copyHostCerts
	I1114 14:20:53.412033 1316169 exec_runner.go:144] found /home/jenkins/minikube-integration/17581-1186318/.minikube/ca.pem, removing ...
	I1114 14:20:53.412061 1316169 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17581-1186318/.minikube/ca.pem
	I1114 14:20:53.412150 1316169 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17581-1186318/.minikube/ca.pem (1082 bytes)
	I1114 14:20:53.412253 1316169 exec_runner.go:144] found /home/jenkins/minikube-integration/17581-1186318/.minikube/cert.pem, removing ...
	I1114 14:20:53.412258 1316169 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17581-1186318/.minikube/cert.pem
	I1114 14:20:53.412286 1316169 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17581-1186318/.minikube/cert.pem (1123 bytes)
	I1114 14:20:53.412347 1316169 exec_runner.go:144] found /home/jenkins/minikube-integration/17581-1186318/.minikube/key.pem, removing ...
	I1114 14:20:53.412351 1316169 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17581-1186318/.minikube/key.pem
	I1114 14:20:53.412375 1316169 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17581-1186318/.minikube/key.pem (1675 bytes)
	I1114 14:20:53.412432 1316169 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17581-1186318/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17581-1186318/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17581-1186318/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-042371 san=[192.168.70.132 127.0.0.1 localhost 127.0.0.1 minikube running-upgrade-042371]
	I1114 14:20:53.744900 1316169 provision.go:172] copyRemoteCerts
	I1114 14:20:53.744979 1316169 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1114 14:20:53.745025 1316169 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-042371
	I1114 14:20:53.767964 1316169 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34467 SSHKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/running-upgrade-042371/id_rsa Username:docker}
	I1114 14:20:53.883213 1316169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1114 14:20:53.914093 1316169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1114 14:20:53.945574 1316169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1114 14:20:53.975741 1316169 provision.go:86] duration metric: configureAuth took 605.164735ms
	I1114 14:20:53.975766 1316169 ubuntu.go:193] setting minikube options for container-runtime
	I1114 14:20:53.975954 1316169 config.go:182] Loaded profile config "running-upgrade-042371": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1114 14:20:53.976076 1316169 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-042371
	I1114 14:20:54.003733 1316169 main.go:141] libmachine: Using SSH client type: native
	I1114 14:20:54.004187 1316169 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bded0] 0x3c0640 <nil>  [] 0s} 127.0.0.1 34467 <nil> <nil>}
	I1114 14:20:54.004206 1316169 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1114 14:20:54.609327 1316169 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1114 14:20:54.609352 1316169 machine.go:91] provisioned docker machine in 1.628357153s
	I1114 14:20:54.609362 1316169 start.go:300] post-start starting for "running-upgrade-042371" (driver="docker")
	I1114 14:20:54.609374 1316169 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1114 14:20:54.609469 1316169 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1114 14:20:54.609511 1316169 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-042371
	I1114 14:20:54.629998 1316169 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34467 SSHKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/running-upgrade-042371/id_rsa Username:docker}
	I1114 14:20:54.730364 1316169 ssh_runner.go:195] Run: cat /etc/os-release
	I1114 14:20:54.734460 1316169 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1114 14:20:54.734489 1316169 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1114 14:20:54.734500 1316169 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1114 14:20:54.734508 1316169 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I1114 14:20:54.734519 1316169 filesync.go:126] Scanning /home/jenkins/minikube-integration/17581-1186318/.minikube/addons for local assets ...
	I1114 14:20:54.734579 1316169 filesync.go:126] Scanning /home/jenkins/minikube-integration/17581-1186318/.minikube/files for local assets ...
	I1114 14:20:54.734671 1316169 filesync.go:149] local asset: /home/jenkins/minikube-integration/17581-1186318/.minikube/files/etc/ssl/certs/11916902.pem -> 11916902.pem in /etc/ssl/certs
	I1114 14:20:54.734781 1316169 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1114 14:20:54.743755 1316169 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/files/etc/ssl/certs/11916902.pem --> /etc/ssl/certs/11916902.pem (1708 bytes)
	I1114 14:20:54.767819 1316169 start.go:303] post-start completed in 158.439897ms
	I1114 14:20:54.767902 1316169 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1114 14:20:54.767952 1316169 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-042371
	I1114 14:20:54.787311 1316169 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34467 SSHKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/running-upgrade-042371/id_rsa Username:docker}
	I1114 14:20:54.883411 1316169 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1114 14:20:54.890597 1316169 fix.go:56] fixHost completed within 1.94761417s
	I1114 14:20:54.890624 1316169 start.go:83] releasing machines lock for "running-upgrade-042371", held for 1.947666174s
	I1114 14:20:54.890705 1316169 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-042371
	I1114 14:20:54.920446 1316169 ssh_runner.go:195] Run: cat /version.json
	I1114 14:20:54.920531 1316169 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-042371
	I1114 14:20:54.920842 1316169 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1114 14:20:54.920948 1316169 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-042371
	I1114 14:20:54.945973 1316169 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34467 SSHKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/running-upgrade-042371/id_rsa Username:docker}
	I1114 14:20:54.948914 1316169 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34467 SSHKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/running-upgrade-042371/id_rsa Username:docker}
	W1114 14:20:55.134647 1316169 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1114 14:20:55.134732 1316169 ssh_runner.go:195] Run: systemctl --version
	I1114 14:20:55.140987 1316169 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1114 14:20:55.359508 1316169 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1114 14:20:55.365751 1316169 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1114 14:20:55.393236 1316169 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1114 14:20:55.393327 1316169 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1114 14:20:55.437025 1316169 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1114 14:20:55.437052 1316169 start.go:472] detecting cgroup driver to use...
	I1114 14:20:55.437096 1316169 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1114 14:20:55.437160 1316169 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1114 14:20:55.475587 1316169 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1114 14:20:55.488140 1316169 docker.go:203] disabling cri-docker service (if available) ...
	I1114 14:20:55.488217 1316169 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1114 14:20:55.500693 1316169 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1114 14:20:55.513750 1316169 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1114 14:20:55.530164 1316169 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1114 14:20:55.530240 1316169 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1114 14:20:55.709673 1316169 docker.go:219] disabling docker service ...
	I1114 14:20:55.709754 1316169 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1114 14:20:55.731509 1316169 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1114 14:20:55.756791 1316169 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1114 14:20:55.909966 1316169 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1114 14:20:56.063732 1316169 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1114 14:20:56.076948 1316169 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1114 14:20:56.096198 1316169 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1114 14:20:56.096268 1316169 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 14:20:56.112163 1316169 out.go:177] 
	W1114 14:20:56.114033 1316169 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1114 14:20:56.114055 1316169 out.go:239] * 
	* 
	W1114 14:20:56.115164 1316169 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1114 14:20:56.116880 1316169 out.go:177] 

** /stderr **
version_upgrade_test.go:145: upgrade from v1.17.0 to HEAD failed: out/minikube-linux-arm64 start -p running-upgrade-042371 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
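The proximate cause is at the end of the stderr above: the unguarded "sed -i" exits with status 2 because the node created by the v1.17.0 binary (kicbase v0.0.17) has no /etc/crio/crio.conf.d/02-crio.conf; that image predates the drop-in config layout and keeps its CRI-O settings in the primary config file. A hedged sketch of a defensive variant of the same command, falling back to CRI-O's primary config path /etc/crio/crio.conf when the drop-in file is absent (a sketch only, not the project's actual fix):

	# sketch only: prefer the drop-in file, fall back to CRI-O's primary config
	CONF=/etc/crio/crio.conf.d/02-crio.conf
	[ -f "$CONF" ] || CONF=/etc/crio/crio.conf
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$CONF"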
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2023-11-14 14:20:56.143068813 +0000 UTC m=+2821.260719801
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-042371
helpers_test.go:235: (dbg) docker inspect running-upgrade-042371:

-- stdout --
	[
	    {
	        "Id": "e3eaaa883d836f2e347f5828c1ddb6aa06ecdd953233b8db30ddfa95fc25ad26",
	        "Created": "2023-11-14T14:20:03.65718485Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1312377,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-11-14T14:20:04.111517232Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9b79b8263a5873a7b57b8bb7698df1f71e90108b3174dea92dc6c576c0a9dbf9",
	        "ResolvConfPath": "/var/lib/docker/containers/e3eaaa883d836f2e347f5828c1ddb6aa06ecdd953233b8db30ddfa95fc25ad26/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e3eaaa883d836f2e347f5828c1ddb6aa06ecdd953233b8db30ddfa95fc25ad26/hostname",
	        "HostsPath": "/var/lib/docker/containers/e3eaaa883d836f2e347f5828c1ddb6aa06ecdd953233b8db30ddfa95fc25ad26/hosts",
	        "LogPath": "/var/lib/docker/containers/e3eaaa883d836f2e347f5828c1ddb6aa06ecdd953233b8db30ddfa95fc25ad26/e3eaaa883d836f2e347f5828c1ddb6aa06ecdd953233b8db30ddfa95fc25ad26-json.log",
	        "Name": "/running-upgrade-042371",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-042371:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "running-upgrade-042371",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/fe544e119b60ebf4c6e29568583ad9a59eb22bf278e847d1f976368ab17907b7-init/diff:/var/lib/docker/overlay2/b57a36709a8966bb418afd115c9ed2fbfb497e6e3913122d1e79b3787e105a8c/diff:/var/lib/docker/overlay2/f258c23dda6d6495ee025ac423a195573b63202b4159499346942d0daf993a30/diff:/var/lib/docker/overlay2/03593902601fa895c008a19fe09a8523e07bb9187807a2892b392517b8fcc472/diff:/var/lib/docker/overlay2/15f0500aeac9f8f83c44c0f9ff246ff219b782ab94d0c946bb1609d40814fd87/diff:/var/lib/docker/overlay2/7d85032fb8ca206e7d6597209764ae251384d6e7f058239a60466be8d139ebe6/diff:/var/lib/docker/overlay2/49de1f46616b80e69289679bcf57d869ed8c7c3b86999127f977abe813386c18/diff:/var/lib/docker/overlay2/fd0bb4f315d81b7ac057331778bb7326f28b4d8c7151ed42f65877bcddc55ba6/diff:/var/lib/docker/overlay2/cf6068359dbecdc739ca3cc73e0c29d54b5a49d67f616d321456391286a83ee5/diff:/var/lib/docker/overlay2/0919de4adae05a2c4271d0edac6ed2c6d8f7e22a39670aa43d9c11c1b89f9dff/diff:/var/lib/docker/overlay2/1f05e4bfcec64d64cbd4a684bedb920d5b627845708e3c638d5328aa3ceec26c/diff:/var/lib/docker/overlay2/b073becb3969ef897234882abcafcc7d523af849680db9518ed18a42ac276906/diff:/var/lib/docker/overlay2/052f2c4c40d57ae5707d9ca5c600cf85765a019dc760562bed903f17a7170cbb/diff:/var/lib/docker/overlay2/42661e01b93876734098c6cc47f33601642eeba375ce08eda12b22530be9b9dc/diff:/var/lib/docker/overlay2/5ae033b966aab90ee404521399eb73abf20694a9d9aa02c959f814d0d0ba5878/diff:/var/lib/docker/overlay2/c2fa8381d075234ae8a55687049fa5b8f398d2934a8bcb1aedd8ace6adb0f725/diff:/var/lib/docker/overlay2/9f12203c55eccbb00077189132e2953fc0431a0aa39ea5901c6660a1c8b223e6/diff:/var/lib/docker/overlay2/68b38b40552b8dc92391a7506db4eff5856a0fb706d78fc513a5c5b87f355f02/diff:/var/lib/docker/overlay2/adeacacaedbbab391516420b362652fe4578b1908e090194a42f95c8fb535eda/diff:/var/lib/docker/overlay2/5c51fc97dc146788113fc7c742f2cfbabb061a8333e9028ce9edb884191a98e6/diff:/var/lib/docker/overlay2/8f644e89b231a64555e598f9e6b4621062643137e708057b11d7cd5417ee13de/diff:/var/lib/docker/overlay2/5d63fd78fc856f9e8141d433afc83c58a9de696ce47772c5111799db00bf814b/diff:/var/lib/docker/overlay2/0606d9baa4dfe4981e414256661271a09a01d9cfea945bf1b6c94e20bbcdd75e/diff:/var/lib/docker/overlay2/efe35f845747a0d02c62142d5a361c723b26863b38ae9e8fc6bec0567c2528bd/diff:/var/lib/docker/overlay2/cc37ad386cca175dc93f07dfafff838280ea55b3fa3e2c75c6a3fd6ee7b5ffb2/diff:/var/lib/docker/overlay2/d5bd950087d7ea440fc8b34b560dce839d2c9a3d7bfc7cfe4239bab839452f07/diff:/var/lib/docker/overlay2/0db6ce9dc8f04f946aee773c848da413a2c81ff90f5f0e2d02063559123ae120/diff:/var/lib/docker/overlay2/ad70bfdacfe898dde74c040ae68e337ebe8d0de072632ab42842f6bf5e3dc396/diff:/var/lib/docker/overlay2/ce023ce72c41557b28dc93c9637ceaa65cf8b3fbb48892e33ee31503132f3430/diff:/var/lib/docker/overlay2/3c31653f722bd62d5a3737a86560a28eb8c46abacdb5a644c20de9fac090eb66/diff:/var/lib/docker/overlay2/031768795c00092f419babf1ab217f1cbe4a8af9ce70a2298f903f02e2125a08/diff:/var/lib/docker/overlay2/454234ccfb92ea2279334bba6a6c60a29735a7d70005da9729442bfbe9d8fd7b/diff:/var/lib/docker/overlay2/8099e5b0995442cc441c93e990233e024183d72218a4bfa41511f15a06d3e5e6/diff:/var/lib/docker/overlay2/960ef1e7b35a99b224a2fcf160e18dab29c8efe9d3f73a19ca6acaadc1e1f335/diff:/var/lib/docker/overlay2/89d4d0a1caca735336538faf7ca51f0a9fb3f8868e40cb8260e3eff8af04bd0b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fe544e119b60ebf4c6e29568583ad9a59eb22bf278e847d1f976368ab17907b7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fe544e119b60ebf4c6e29568583ad9a59eb22bf278e847d1f976368ab17907b7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fe544e119b60ebf4c6e29568583ad9a59eb22bf278e847d1f976368ab17907b7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-042371",
	                "Source": "/var/lib/docker/volumes/running-upgrade-042371/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-042371",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-042371",
	                "name.minikube.sigs.k8s.io": "running-upgrade-042371",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9ea93e9e8a25caad816ff354b3106c86c1972f2ee5bb20d27836bd3cadecbf53",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34467"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34466"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34465"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34464"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/9ea93e9e8a25",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "running-upgrade-042371": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.70.132"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "e3eaaa883d83",
	                        "running-upgrade-042371"
	                    ],
	                    "NetworkID": "94787beeac27339b3f4cbad7510c3f0d9920b7b4f0cd57d629830287d6c096cd",
	                    "EndpointID": "66e9065d42b8ab4c807ffdf9f815561a05bc6f9de9cbc79797000f25556b93bf",
	                    "Gateway": "192.168.70.1",
	                    "IPAddress": "192.168.70.132",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:46:84",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
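The inspect output above shows the pattern minikube relies on for host access: every container port (22, 2376, 5000, 8443) is published on a random loopback port. Later provisioning logs in this report recover the SSH port with a `docker container inspect -f` Go template; below is a minimal sketch of the same lookup, assuming Docker is on PATH (the profile name is taken from this test):

	// portprobe.go: recover the host port Docker mapped to the container's
	// 22/tcp, using the same inspect template that appears later in these logs.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		name := "running-upgrade-042371" // container/profile name from this test
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, name).Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("ssh reachable on 127.0.0.1:" + strings.TrimSpace(string(out)))
	}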
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p running-upgrade-042371 -n running-upgrade-042371
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p running-upgrade-042371 -n running-upgrade-042371: exit status 4 (443.108098ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1114 14:20:56.522324 1316845 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-042371" does not appear in /home/jenkins/minikube-integration/17581-1186318/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-042371" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
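The exit status 4 above comes from the kubeconfig check behind status.go:415: the profile must appear as a cluster entry in the kubeconfig before its endpoint IP can be extracted, which is also why the stdout suggests `minikube update-context`. Below is a minimal sketch of that check, assuming client-go is available; the path and profile name are copied from the log:

	// kubeconfigcheck.go: look up a profile's cluster entry the way the
	// status check does, and report its API endpoint if present.
	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		path := "/home/jenkins/minikube-integration/17581-1186318/kubeconfig" // from the log
		cfg, err := clientcmd.LoadFromFile(path)
		if err != nil {
			fmt.Println("load kubeconfig:", err)
			return
		}
		cluster, ok := cfg.Clusters["running-upgrade-042371"]
		if !ok {
			// This is the condition reported above as exit status 4.
			fmt.Println(`"running-upgrade-042371" does not appear in kubeconfig`)
			return
		}
		fmt.Println("endpoint:", cluster.Server)
	}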
helpers_test.go:175: Cleaning up "running-upgrade-042371" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-042371
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-042371: (3.041255512s)
--- FAIL: TestRunningBinaryUpgrade (77.94s)

                                                
                                    
TestMissingContainerUpgrade (142.13s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:322: (dbg) Run:  /tmp/minikube-v1.17.0.2223457195.exe start -p missing-upgrade-895930 --memory=2200 --driver=docker  --container-runtime=crio
E1114 14:16:09.660532 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/client.crt: no such file or directory
version_upgrade_test.go:322: (dbg) Done: /tmp/minikube-v1.17.0.2223457195.exe start -p missing-upgrade-895930 --memory=2200 --driver=docker  --container-runtime=crio: (1m40.641164044s)
version_upgrade_test.go:331: (dbg) Run:  docker stop missing-upgrade-895930
version_upgrade_test.go:331: (dbg) Done: docker stop missing-upgrade-895930: (4.300511622s)
version_upgrade_test.go:336: (dbg) Run:  docker rm missing-upgrade-895930
version_upgrade_test.go:342: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-895930 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1114 14:17:14.368440 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/addons-008546/client.crt: no such file or directory
E1114 14:17:32.702606 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/client.crt: no such file or directory
version_upgrade_test.go:342: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p missing-upgrade-895930 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (33.313882027s)

                                                
                                                
-- stdout --
	* [missing-upgrade-895930] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17581
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17581-1186318/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17581-1186318/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	* Using the docker driver based on existing profile
	* Starting control plane node missing-upgrade-895930 in cluster missing-upgrade-895930
	* Pulling base image ...
	* docker "missing-upgrade-895930" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1114 14:17:07.683417 1301264 out.go:296] Setting OutFile to fd 1 ...
	I1114 14:17:07.683664 1301264 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 14:17:07.683672 1301264 out.go:309] Setting ErrFile to fd 2...
	I1114 14:17:07.683680 1301264 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 14:17:07.683979 1301264 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17581-1186318/.minikube/bin
	I1114 14:17:07.684363 1301264 out.go:303] Setting JSON to false
	I1114 14:17:07.685593 1301264 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":39574,"bootTime":1699931854,"procs":352,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1114 14:17:07.685696 1301264 start.go:138] virtualization:  
	I1114 14:17:07.689649 1301264 out.go:177] * [missing-upgrade-895930] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1114 14:17:07.692205 1301264 notify.go:220] Checking for updates...
	I1114 14:17:07.697108 1301264 out.go:177]   - MINIKUBE_LOCATION=17581
	I1114 14:17:07.699229 1301264 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1114 14:17:07.701358 1301264 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17581-1186318/kubeconfig
	I1114 14:17:07.709383 1301264 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17581-1186318/.minikube
	I1114 14:17:07.712236 1301264 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1114 14:17:07.714515 1301264 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1114 14:17:07.717212 1301264 config.go:182] Loaded profile config "missing-upgrade-895930": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1114 14:17:07.719775 1301264 out.go:177] * Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	I1114 14:17:07.721859 1301264 driver.go:378] Setting default libvirt URI to qemu:///system
	I1114 14:17:07.756721 1301264 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1114 14:17:07.756850 1301264 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1114 14:17:07.904665 1301264 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:45 SystemTime:2023-11-14 14:17:07.891120788 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1114 14:17:07.904804 1301264 docker.go:295] overlay module found
	I1114 14:17:07.907772 1301264 out.go:177] * Using the docker driver based on existing profile
	I1114 14:17:07.912420 1301264 start.go:298] selected driver: docker
	I1114 14:17:07.912446 1301264 start.go:902] validating driver "docker" against &{Name:missing-upgrade-895930 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:missing-upgrade-895930 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.187 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1114 14:17:07.912585 1301264 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1114 14:17:07.913324 1301264 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1114 14:17:08.026015 1301264 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:45 SystemTime:2023-11-14 14:17:08.014735316 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1114 14:17:08.026337 1301264 cni.go:84] Creating CNI manager for ""
	I1114 14:17:08.026360 1301264 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1114 14:17:08.026374 1301264 start_flags.go:323] config:
	{Name:missing-upgrade-895930 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:missing-upgrade-895930 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.187 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1114 14:17:08.028682 1301264 out.go:177] * Starting control plane node missing-upgrade-895930 in cluster missing-upgrade-895930
	I1114 14:17:08.030602 1301264 cache.go:121] Beginning downloading kic base image for docker with crio
	I1114 14:17:08.032384 1301264 out.go:177] * Pulling base image ...
	I1114 14:17:08.034257 1301264 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I1114 14:17:08.034497 1301264 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I1114 14:17:08.061340 1301264 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e to local cache
	I1114 14:17:08.061536 1301264 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local cache directory
	I1114 14:17:08.062095 1301264 image.go:118] Writing gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e to local cache
	W1114 14:17:08.118645 1301264 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1114 14:17:08.118794 1301264 profile.go:148] Saving config to /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/missing-upgrade-895930/config.json ...
	I1114 14:17:08.119217 1301264 cache.go:107] acquiring lock: {Name:mkc3f9e8e80dc5cc581400c732c2f75eea7927c7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 14:17:08.119313 1301264 cache.go:115] /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1114 14:17:08.119321 1301264 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 112.368µs
	I1114 14:17:08.119330 1301264 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1114 14:17:08.119341 1301264 cache.go:107] acquiring lock: {Name:mk69a9d8cb51e3aa2e98715b9e677afbd5be8339 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 14:17:08.119428 1301264 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.2
	I1114 14:17:08.119592 1301264 cache.go:107] acquiring lock: {Name:mkeaa593a19bc596f779e94db679e747fb3e86dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 14:17:08.119671 1301264 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.2
	I1114 14:17:08.119749 1301264 cache.go:107] acquiring lock: {Name:mkadd812de336b999f8e5a8809642906e01f1791 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 14:17:08.119810 1301264 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.2
	I1114 14:17:08.119868 1301264 cache.go:107] acquiring lock: {Name:mkf5a96f0221a0606c0d1d34ec321ba5896544c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 14:17:08.119963 1301264 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.2
	I1114 14:17:08.120046 1301264 cache.go:107] acquiring lock: {Name:mkd39b5b1ff28004cfb5f4307bd5b83ed11c8162 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 14:17:08.120128 1301264 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1114 14:17:08.120204 1301264 cache.go:107] acquiring lock: {Name:mk1e00d1f8459fd8271d7a94ac6e5793eb6baf5b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 14:17:08.120269 1301264 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1114 14:17:08.120338 1301264 cache.go:107] acquiring lock: {Name:mkc2480d47d595c5e286f3ea4f50224c887ce0f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 14:17:08.120407 1301264 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I1114 14:17:08.122405 1301264 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.2
	I1114 14:17:08.123880 1301264 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.2
	I1114 14:17:08.124827 1301264 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1114 14:17:08.125171 1301264 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1114 14:17:08.125433 1301264 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.2
	I1114 14:17:08.125693 1301264 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1114 14:17:08.125953 1301264 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.2
	W1114 14:17:08.480276 1301264 image.go:265] image registry.k8s.io/kube-proxy:v1.20.2 arch mismatch: want arm64 got amd64. fixing
	I1114 14:17:08.480360 1301264 cache.go:162] opening:  /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2
	W1114 14:17:08.510756 1301264 image.go:265] image registry.k8s.io/etcd:3.4.13-0 arch mismatch: want arm64 got amd64. fixing
	I1114 14:17:08.510887 1301264 cache.go:162] opening:  /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0
	I1114 14:17:08.515140 1301264 cache.go:162] opening:  /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2
	W1114 14:17:08.540857 1301264 image.go:265] image registry.k8s.io/coredns:1.7.0 arch mismatch: want arm64 got amd64. fixing
	I1114 14:17:08.540914 1301264 cache.go:162] opening:  /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0
	I1114 14:17:08.548745 1301264 cache.go:162] opening:  /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2
	I1114 14:17:08.557230 1301264 cache.go:162] opening:  /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I1114 14:17:08.563501 1301264 cache.go:162] opening:  /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2
	    > gcr.io/k8s-minikube/kicbase...:  17.69 KiB / 287.99 MiB [>] 0.01% ? p/s ?I1114 14:17:08.700559 1301264 cache.go:157] /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I1114 14:17:08.700609 1301264 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 580.555556ms
	I1114 14:17:08.700623 1301264 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  513.37 KiB / 287.99 MiB [] 0.17% ? p/s ?I1114 14:17:08.995648 1301264 cache.go:157] /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I1114 14:17:08.995675 1301264 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 875.926506ms
	I1114 14:17:08.995688 1301264 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  7.80 MiB / 287.99 MiB [>_] 2.71% ? p/s ?I1114 14:17:09.109462 1301264 cache.go:157] /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I1114 14:17:09.109493 1301264 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 989.154773ms
	I1114 14:17:09.109506 1301264 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  18.99 MiB / 287.99 MiB  6.59% 31.52 MiB     > gcr.io/k8s-minikube/kicbase...:  25.93 MiB / 287.99 MiB  9.00% 31.52 MiB I1114 14:17:09.523137 1301264 cache.go:157] /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I1114 14:17:09.523164 1301264 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 1.403821569s
	I1114 14:17:09.523177 1301264 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  25.93 MiB / 287.99 MiB  9.00% 31.52 MiB I1114 14:17:09.755462 1301264 cache.go:157] /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I1114 14:17:09.755497 1301264 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 1.635906331s
	I1114 14:17:09.755511 1301264 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  25.93 MiB / 287.99 MiB  9.00% 30.23 MiB     > gcr.io/k8s-minikube/kicbase...:  25.97 MiB / 287.99 MiB  9.02% 30.23 MiB     > gcr.io/k8s-minikube/kicbase...:  39.19 MiB / 287.99 MiB  13.61% 30.23 MiB    > gcr.io/k8s-minikube/kicbase...:  43.87 MiB / 287.99 MiB  15.23% 30.22 MiBI1114 14:17:10.523167 1301264 cache.go:157] /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I1114 14:17:10.523237 1301264 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 2.403363899s
	I1114 14:17:10.523265 1301264 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	    > gcr.io/k8s-minikube/kicbase...:  56.79 MiB / 287.99 MiB  19.72% 30.22 MiB    > gcr.io/k8s-minikube/kicbase...:  67.79 MiB / 287.99 MiB  23.54% 30.22 MiB    > gcr.io/k8s-minikube/kicbase...:  79.79 MiB / 287.99 MiB  27.71% 32.13 MiB    > gcr.io/k8s-minikube/kicbase...:  99.23 MiB / 287.99 MiB  34.46% 32.13 MiB    > gcr.io/k8s-minikube/kicbase...:  119.62 MiB / 287.99 MiB  41.54% 32.13 Mi    > gcr.io/k8s-minikube/kicbase...:  141.29 MiB / 287.99 MiB  49.06% 36.67 Mi    > gcr.io/k8s-minikube/kicbase...:  162.86 MiB / 287.99 MiB  56.55% 36.67 Mi    > gcr.io/k8s-minikube/kicbase...:  171.72 MiB / 287.99 MiB  59.63% 36.67 Mi    > gcr.io/k8s-minikube/kicbase...:  179.46 MiB / 287.99 MiB  62.31% 38.41 Mi    > gcr.io/k8s-minikube/kicbase...:  190.54 MiB / 287.99 MiB  66.16% 38.41 MiI1114 14:17:12.604215 1301264 cache.go:157] /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I1114 14:17:12.604311 1301264 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 4.484107319s
	I1114 14:17:12.604343 1301264 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I1114 14:17:12.604438 1301264 cache.go:87] Successfully saved all images to host disk.
	    > gcr.io/k8s-minikube/kicbase...:  203.73 MiB / 287.99 MiB  70.74% 38.41 Mi    > gcr.io/k8s-minikube/kicbase...:  209.68 MiB / 287.99 MiB  72.81% 39.18 Mi    > gcr.io/k8s-minikube/kicbase...:  214.23 MiB / 287.99 MiB  74.39% 39.18 Mi    > gcr.io/k8s-minikube/kicbase...:  217.68 MiB / 287.99 MiB  75.59% 39.18 Mi    > gcr.io/k8s-minikube/kicbase...:  225.93 MiB / 287.99 MiB  78.45% 38.40 Mi    > gcr.io/k8s-minikube/kicbase...:  238.06 MiB / 287.99 MiB  82.66% 38.40 Mi    > gcr.io/k8s-minikube/kicbase...:  239.09 MiB / 287.99 MiB  83.02% 38.40 Mi    > gcr.io/k8s-minikube/kicbase...:  254.32 MiB / 287.99 MiB  88.31% 38.97 Mi    > gcr.io/k8s-minikube/kicbase...:  265.05 MiB / 287.99 MiB  92.03% 38.97 Mi    > gcr.io/k8s-minikube/kicbase...:  265.63 MiB / 287.99 MiB  92.24% 38.97 Mi    > gcr.io/k8s-minikube/kicbase...:  281.05 MiB / 287.99 MiB  97.59% 39.33 Mi    > gcr.io/k8s-minikube/kicbase...:  287.96 MiB / 287.99 MiB  99.99% 39.33 Mi    > gcr.io/k8s-minikube/kicbase...:  287.96 MiB / 287.99 MiB  99.99% 39.33 Mi    > gcr.io/k8s-minikube/kicbase...:  287.97 MiB / 287.99 MiB  99.99% 37.54 Mi    > gcr.io/k8s-minikube/kicbase...:  287.97 MiB / 287.99 MiB  99.99% 37.54 Mi    > gcr.io/k8s-minikube/kicbase...:  287.97 MiB / 287.99 MiB  99.99% 37.54 Mi    > gcr.io/k8s-minikube/kicbase...:  287.98 MiB / 287.99 MiB  100.00% 35.12 M    > gcr.io/k8s-minikube/kicbase...:  287.99 MiB / 287.99 MiB  100.00% 35.12 M    > gcr.io/k8s-minikube/kicbase...:  287.99 MiB / 287.99 MiB  100.00% 38.79 MI1114 14:17:16.090008 1301264 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e as a tarball
	I1114 14:17:16.090021 1301264 cache.go:162] Loading gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e from local cache
	I1114 14:17:17.032830 1301264 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e from cached tarball
	I1114 14:17:17.032869 1301264 cache.go:194] Successfully downloaded all kic artifacts
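	The cache.go lines above follow a reuse-or-fetch pattern: each required image is first looked up as a tarball under the minikube home (cache/images/arm64/... in this run), and only missing entries are downloaded and saved. A minimal sketch of that shape, with the download step stubbed out and the paths illustrative:

	// cachesketch.go: reuse a cached image tarball if present, otherwise
	// report that a download would happen (stubbed for the sketch).
	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	func ensureCached(home, image string) (string, error) {
		p := filepath.Join(home, "cache", "images", "arm64", image)
		if _, err := os.Stat(p); err == nil {
			fmt.Println("cache hit:", p) // the "... exists" lines above
			return p, nil
		}
		fmt.Println("cache miss, would download:", image)
		return p, os.MkdirAll(filepath.Dir(p), 0o755)
	}

	func main() {
		home := os.Getenv("MINIKUBE_HOME") // e.g. the .minikube dir in the log
		if _, err := ensureCached(home, "registry.k8s.io/pause_3.2"); err != nil {
			fmt.Println(err)
		}
	}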
	I1114 14:17:17.032929 1301264 start.go:365] acquiring machines lock for missing-upgrade-895930: {Name:mk86bbfedd6011fa66645f447bdf42b9e85f3567 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 14:17:17.033004 1301264 start.go:369] acquired machines lock for "missing-upgrade-895930" in 48.729µs
	I1114 14:17:17.033029 1301264 start.go:96] Skipping create...Using existing machine configuration
	I1114 14:17:17.033054 1301264 fix.go:54] fixHost starting: 
	I1114 14:17:17.033331 1301264 cli_runner.go:164] Run: docker container inspect missing-upgrade-895930 --format={{.State.Status}}
	W1114 14:17:17.076496 1301264 cli_runner.go:211] docker container inspect missing-upgrade-895930 --format={{.State.Status}} returned with exit code 1
	I1114 14:17:17.076588 1301264 fix.go:102] recreateIfNeeded on missing-upgrade-895930: state= err=unknown state "missing-upgrade-895930": docker container inspect missing-upgrade-895930 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-895930
	I1114 14:17:17.076611 1301264 fix.go:107] machineExists: false. err=machine does not exist
	I1114 14:17:17.078824 1301264 out.go:177] * docker "missing-upgrade-895930" container is missing, will recreate.
	I1114 14:17:17.081089 1301264 delete.go:124] DEMOLISHING missing-upgrade-895930 ...
	I1114 14:17:17.081195 1301264 cli_runner.go:164] Run: docker container inspect missing-upgrade-895930 --format={{.State.Status}}
	W1114 14:17:17.102217 1301264 cli_runner.go:211] docker container inspect missing-upgrade-895930 --format={{.State.Status}} returned with exit code 1
	W1114 14:17:17.102277 1301264 stop.go:75] unable to get state: unknown state "missing-upgrade-895930": docker container inspect missing-upgrade-895930 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-895930
	I1114 14:17:17.102296 1301264 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "missing-upgrade-895930": docker container inspect missing-upgrade-895930 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-895930
	I1114 14:17:17.102759 1301264 cli_runner.go:164] Run: docker container inspect missing-upgrade-895930 --format={{.State.Status}}
	W1114 14:17:17.121730 1301264 cli_runner.go:211] docker container inspect missing-upgrade-895930 --format={{.State.Status}} returned with exit code 1
	I1114 14:17:17.121798 1301264 delete.go:82] Unable to get host status for missing-upgrade-895930, assuming it has already been deleted: state: unknown state "missing-upgrade-895930": docker container inspect missing-upgrade-895930 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-895930
	I1114 14:17:17.121860 1301264 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-895930
	W1114 14:17:17.140667 1301264 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-895930 returned with exit code 1
	I1114 14:17:17.140701 1301264 kic.go:371] could not find the container missing-upgrade-895930 to remove it. will try anyways
	I1114 14:17:17.140756 1301264 cli_runner.go:164] Run: docker container inspect missing-upgrade-895930 --format={{.State.Status}}
	W1114 14:17:17.159589 1301264 cli_runner.go:211] docker container inspect missing-upgrade-895930 --format={{.State.Status}} returned with exit code 1
	W1114 14:17:17.159643 1301264 oci.go:84] error getting container status, will try to delete anyways: unknown state "missing-upgrade-895930": docker container inspect missing-upgrade-895930 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-895930
	I1114 14:17:17.159703 1301264 cli_runner.go:164] Run: docker exec --privileged -t missing-upgrade-895930 /bin/bash -c "sudo init 0"
	W1114 14:17:17.178367 1301264 cli_runner.go:211] docker exec --privileged -t missing-upgrade-895930 /bin/bash -c "sudo init 0" returned with exit code 1
	I1114 14:17:17.178397 1301264 oci.go:650] error shutdown missing-upgrade-895930: docker exec --privileged -t missing-upgrade-895930 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-895930
	I1114 14:17:18.178609 1301264 cli_runner.go:164] Run: docker container inspect missing-upgrade-895930 --format={{.State.Status}}
	W1114 14:17:18.201212 1301264 cli_runner.go:211] docker container inspect missing-upgrade-895930 --format={{.State.Status}} returned with exit code 1
	I1114 14:17:18.201289 1301264 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-895930": docker container inspect missing-upgrade-895930 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-895930
	I1114 14:17:18.201302 1301264 oci.go:664] temporary error: container missing-upgrade-895930 status is  but expect it to be exited
	I1114 14:17:18.201332 1301264 retry.go:31] will retry after 587.585082ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-895930": docker container inspect missing-upgrade-895930 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-895930
	I1114 14:17:18.789139 1301264 cli_runner.go:164] Run: docker container inspect missing-upgrade-895930 --format={{.State.Status}}
	W1114 14:17:18.808038 1301264 cli_runner.go:211] docker container inspect missing-upgrade-895930 --format={{.State.Status}} returned with exit code 1
	I1114 14:17:18.808122 1301264 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-895930": docker container inspect missing-upgrade-895930 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-895930
	I1114 14:17:18.808165 1301264 oci.go:664] temporary error: container missing-upgrade-895930 status is  but expect it to be exited
	I1114 14:17:18.808193 1301264 retry.go:31] will retry after 492.343518ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-895930": docker container inspect missing-upgrade-895930 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-895930
	I1114 14:17:19.300831 1301264 cli_runner.go:164] Run: docker container inspect missing-upgrade-895930 --format={{.State.Status}}
	W1114 14:17:19.327732 1301264 cli_runner.go:211] docker container inspect missing-upgrade-895930 --format={{.State.Status}} returned with exit code 1
	I1114 14:17:19.327811 1301264 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-895930": docker container inspect missing-upgrade-895930 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-895930
	I1114 14:17:19.327839 1301264 oci.go:664] temporary error: container missing-upgrade-895930 status is  but expect it to be exited
	I1114 14:17:19.327865 1301264 retry.go:31] will retry after 1.556283429s: couldn't verify container is exited. %v: unknown state "missing-upgrade-895930": docker container inspect missing-upgrade-895930 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-895930
	I1114 14:17:20.884364 1301264 cli_runner.go:164] Run: docker container inspect missing-upgrade-895930 --format={{.State.Status}}
	W1114 14:17:20.905213 1301264 cli_runner.go:211] docker container inspect missing-upgrade-895930 --format={{.State.Status}} returned with exit code 1
	I1114 14:17:20.905274 1301264 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-895930": docker container inspect missing-upgrade-895930 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-895930
	I1114 14:17:20.905283 1301264 oci.go:664] temporary error: container missing-upgrade-895930 status is  but expect it to be exited
	I1114 14:17:20.905309 1301264 retry.go:31] will retry after 1.45379757s: couldn't verify container is exited. %v: unknown state "missing-upgrade-895930": docker container inspect missing-upgrade-895930 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-895930
	I1114 14:17:22.359376 1301264 cli_runner.go:164] Run: docker container inspect missing-upgrade-895930 --format={{.State.Status}}
	W1114 14:17:22.385781 1301264 cli_runner.go:211] docker container inspect missing-upgrade-895930 --format={{.State.Status}} returned with exit code 1
	I1114 14:17:22.385844 1301264 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-895930": docker container inspect missing-upgrade-895930 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-895930
	I1114 14:17:22.385853 1301264 oci.go:664] temporary error: container missing-upgrade-895930 status is  but expect it to be exited
	I1114 14:17:22.385883 1301264 retry.go:31] will retry after 2.4152239s: couldn't verify container is exited. %v: unknown state "missing-upgrade-895930": docker container inspect missing-upgrade-895930 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-895930
	I1114 14:17:24.801297 1301264 cli_runner.go:164] Run: docker container inspect missing-upgrade-895930 --format={{.State.Status}}
	W1114 14:17:24.835859 1301264 cli_runner.go:211] docker container inspect missing-upgrade-895930 --format={{.State.Status}} returned with exit code 1
	I1114 14:17:24.835929 1301264 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-895930": docker container inspect missing-upgrade-895930 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-895930
	I1114 14:17:24.835940 1301264 oci.go:664] temporary error: container missing-upgrade-895930 status is  but expect it to be exited
	I1114 14:17:24.835966 1301264 retry.go:31] will retry after 3.495969583s: couldn't verify container is exited. %v: unknown state "missing-upgrade-895930": docker container inspect missing-upgrade-895930 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-895930
	I1114 14:17:28.332130 1301264 cli_runner.go:164] Run: docker container inspect missing-upgrade-895930 --format={{.State.Status}}
	W1114 14:17:28.364057 1301264 cli_runner.go:211] docker container inspect missing-upgrade-895930 --format={{.State.Status}} returned with exit code 1
	I1114 14:17:28.364115 1301264 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-895930": docker container inspect missing-upgrade-895930 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-895930
	I1114 14:17:28.364123 1301264 oci.go:664] temporary error: container missing-upgrade-895930 status is  but expect it to be exited
	I1114 14:17:28.364149 1301264 retry.go:31] will retry after 4.045994875s: couldn't verify container is exited. %v: unknown state "missing-upgrade-895930": docker container inspect missing-upgrade-895930 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-895930
	I1114 14:17:32.412714 1301264 cli_runner.go:164] Run: docker container inspect missing-upgrade-895930 --format={{.State.Status}}
	W1114 14:17:32.445473 1301264 cli_runner.go:211] docker container inspect missing-upgrade-895930 --format={{.State.Status}} returned with exit code 1
	I1114 14:17:32.445558 1301264 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-895930": docker container inspect missing-upgrade-895930 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-895930
	I1114 14:17:32.445567 1301264 oci.go:664] temporary error: container missing-upgrade-895930 status is  but expect it to be exited
	I1114 14:17:32.445606 1301264 oci.go:88] couldn't shut down missing-upgrade-895930 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "missing-upgrade-895930": docker container inspect missing-upgrade-895930 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-895930
	 
	I1114 14:17:32.445683 1301264 cli_runner.go:164] Run: docker rm -f -v missing-upgrade-895930
	I1114 14:17:32.478269 1301264 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-895930
	W1114 14:17:32.520691 1301264 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-895930 returned with exit code 1
	I1114 14:17:32.520885 1301264 cli_runner.go:164] Run: docker network inspect missing-upgrade-895930 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1114 14:17:32.566321 1301264 cli_runner.go:164] Run: docker network rm missing-upgrade-895930
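	The retry.go:31 lines above show the shutdown-verification loop: poll the container state with docker container inspect --format={{.State.Status}} at growing intervals, and once the attempts are exhausted fall through to the forced docker rm -f -v a few lines above. A minimal sketch of that loop, not minikube's actual retry package; the attempt count and doubling delay are illustrative:

	// retrysketch.go: poll container state until it reads "exited",
	// backing off between attempts, then give up and force-remove.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func containerState(name string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Status}}").Output()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		name := "missing-upgrade-895930" // profile from the log
		delay := 500 * time.Millisecond
		for attempt := 0; attempt < 8; attempt++ {
			state, err := containerState(name)
			if err == nil && state == "exited" {
				fmt.Println("container exited cleanly")
				return
			}
			fmt.Printf("couldn't verify container is exited (state=%q), retrying in %v\n", state, delay)
			time.Sleep(delay)
			delay *= 2 // the real loop uses jittered, roughly growing delays
		}
		fmt.Println("giving up; proceeding to docker rm -f -v")
	}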
	I1114 14:17:32.698285 1301264 fix.go:114] Sleeping 1 second for extra luck!
	I1114 14:17:33.699048 1301264 start.go:125] createHost starting for "" (driver="docker")
	I1114 14:17:33.701344 1301264 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1114 14:17:33.701494 1301264 start.go:159] libmachine.API.Create for "missing-upgrade-895930" (driver="docker")
	I1114 14:17:33.701520 1301264 client.go:168] LocalClient.Create starting
	I1114 14:17:33.701599 1301264 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/ca.pem
	I1114 14:17:33.701637 1301264 main.go:141] libmachine: Decoding PEM data...
	I1114 14:17:33.701655 1301264 main.go:141] libmachine: Parsing certificate...
	I1114 14:17:33.701717 1301264 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/cert.pem
	I1114 14:17:33.701741 1301264 main.go:141] libmachine: Decoding PEM data...
	I1114 14:17:33.701756 1301264 main.go:141] libmachine: Parsing certificate...
	I1114 14:17:33.702024 1301264 cli_runner.go:164] Run: docker network inspect missing-upgrade-895930 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1114 14:17:33.722870 1301264 cli_runner.go:211] docker network inspect missing-upgrade-895930 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1114 14:17:33.722953 1301264 network_create.go:281] running [docker network inspect missing-upgrade-895930] to gather additional debugging logs...
	I1114 14:17:33.722974 1301264 cli_runner.go:164] Run: docker network inspect missing-upgrade-895930
	W1114 14:17:33.742345 1301264 cli_runner.go:211] docker network inspect missing-upgrade-895930 returned with exit code 1
	I1114 14:17:33.742378 1301264 network_create.go:284] error running [docker network inspect missing-upgrade-895930]: docker network inspect missing-upgrade-895930: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network missing-upgrade-895930 not found
	I1114 14:17:33.742397 1301264 network_create.go:286] output of [docker network inspect missing-upgrade-895930]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network missing-upgrade-895930 not found
	
	** /stderr **
	I1114 14:17:33.742503 1301264 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1114 14:17:33.766611 1301264 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d807bcb05d12 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:da:22:90:37} reservation:<nil>}
	I1114 14:17:33.767023 1301264 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-a4b8be742be9 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:11:48:14:b1} reservation:<nil>}
	I1114 14:17:33.767338 1301264 network.go:214] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-030d545048b3 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:68:6a:ba:c3} reservation:<nil>}
	I1114 14:17:33.767752 1301264 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4002b01c00}
	I1114 14:17:33.767772 1301264 network_create.go:124] attempt to create docker network missing-upgrade-895930 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1114 14:17:33.767828 1301264 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=missing-upgrade-895930 missing-upgrade-895930
	I1114 14:17:33.858991 1301264 network_create.go:108] docker network missing-upgrade-895930 192.168.76.0/24 created
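	The network.go lines above scan candidate private /24 subnets (192.168.49.0, 192.168.58.0, 192.168.67.0, ...) and create the new network on the first one no existing Docker network claims. A minimal sketch of that scan; the step of 9 in the third octet is an assumption read off the log's progression:

	// subnetscan.go: pick the first 192.168.x.0/24 not already used as the
	// IPAM subnet of an existing docker network.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// takenSubnets collects the IPAM subnets of all existing docker networks.
	func takenSubnets() map[string]bool {
		out, _ := exec.Command("docker", "network", "ls", "-q").Output()
		taken := map[string]bool{}
		for _, id := range strings.Fields(string(out)) {
			cidr, err := exec.Command("docker", "network", "inspect", id,
				"--format", "{{range .IPAM.Config}}{{.Subnet}}{{end}}").Output()
			if err == nil {
				taken[strings.TrimSpace(string(cidr))] = true
			}
		}
		return taken
	}

	func main() {
		taken := takenSubnets()
		for octet := 49; octet < 256; octet += 9 {
			subnet := fmt.Sprintf("192.168.%d.0/24", octet)
			if taken[subnet] {
				fmt.Println("skipping subnet", subnet, "that is taken")
				continue
			}
			fmt.Println("using free private subnet", subnet)
			return
		}
	}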
	I1114 14:17:33.859023 1301264 kic.go:121] calculated static IP "192.168.76.2" for the "missing-upgrade-895930" container
	I1114 14:17:33.859096 1301264 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1114 14:17:33.880282 1301264 cli_runner.go:164] Run: docker volume create missing-upgrade-895930 --label name.minikube.sigs.k8s.io=missing-upgrade-895930 --label created_by.minikube.sigs.k8s.io=true
	I1114 14:17:33.899176 1301264 oci.go:103] Successfully created a docker volume missing-upgrade-895930
	I1114 14:17:33.899262 1301264 cli_runner.go:164] Run: docker run --rm --name missing-upgrade-895930-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-895930 --entrypoint /usr/bin/test -v missing-upgrade-895930:/var gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e -d /var/lib
	I1114 14:17:34.433995 1301264 oci.go:107] Successfully prepared a docker volume missing-upgrade-895930
	I1114 14:17:34.434034 1301264 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	W1114 14:17:34.434179 1301264 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1114 14:17:34.434302 1301264 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1114 14:17:34.525764 1301264 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname missing-upgrade-895930 --name missing-upgrade-895930 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-895930 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=missing-upgrade-895930 --network missing-upgrade-895930 --ip 192.168.76.2 --volume missing-upgrade-895930:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e
	I1114 14:17:34.924858 1301264 cli_runner.go:164] Run: docker container inspect missing-upgrade-895930 --format={{.State.Running}}
	I1114 14:17:34.963589 1301264 cli_runner.go:164] Run: docker container inspect missing-upgrade-895930 --format={{.State.Status}}
	I1114 14:17:34.993254 1301264 cli_runner.go:164] Run: docker exec missing-upgrade-895930 stat /var/lib/dpkg/alternatives/iptables
	I1114 14:17:35.087055 1301264 oci.go:144] the created container "missing-upgrade-895930" has a running status.
	I1114 14:17:35.087094 1301264 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/17581-1186318/.minikube/machines/missing-upgrade-895930/id_rsa...
	I1114 14:17:35.809016 1301264 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17581-1186318/.minikube/machines/missing-upgrade-895930/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1114 14:17:35.842022 1301264 cli_runner.go:164] Run: docker container inspect missing-upgrade-895930 --format={{.State.Status}}
	I1114 14:17:35.868734 1301264 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1114 14:17:35.868756 1301264 kic_runner.go:114] Args: [docker exec --privileged missing-upgrade-895930 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1114 14:17:35.968523 1301264 cli_runner.go:164] Run: docker container inspect missing-upgrade-895930 --format={{.State.Status}}
	I1114 14:17:35.993771 1301264 machine.go:88] provisioning docker machine ...
	I1114 14:17:35.993815 1301264 ubuntu.go:169] provisioning hostname "missing-upgrade-895930"
	I1114 14:17:35.993888 1301264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-895930
	I1114 14:17:36.025933 1301264 main.go:141] libmachine: Using SSH client type: native
	I1114 14:17:36.026409 1301264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bded0] 0x3c0640 <nil>  [] 0s} 127.0.0.1 34449 <nil> <nil>}
	I1114 14:17:36.026426 1301264 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-895930 && echo "missing-upgrade-895930" | sudo tee /etc/hostname
	I1114 14:17:36.190408 1301264 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-895930
	
	I1114 14:17:36.190563 1301264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-895930
	I1114 14:17:36.218249 1301264 main.go:141] libmachine: Using SSH client type: native
	I1114 14:17:36.218659 1301264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bded0] 0x3c0640 <nil>  [] 0s} 127.0.0.1 34449 <nil> <nil>}
	I1114 14:17:36.218676 1301264 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-895930' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-895930/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-895930' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1114 14:17:36.375707 1301264 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1114 14:17:36.375809 1301264 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17581-1186318/.minikube CaCertPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17581-1186318/.minikube}
	I1114 14:17:36.375876 1301264 ubuntu.go:177] setting up certificates
	I1114 14:17:36.375899 1301264 provision.go:83] configureAuth start
	I1114 14:17:36.376001 1301264 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-895930
	I1114 14:17:36.413181 1301264 provision.go:138] copyHostCerts
	I1114 14:17:36.413244 1301264 exec_runner.go:144] found /home/jenkins/minikube-integration/17581-1186318/.minikube/key.pem, removing ...
	I1114 14:17:36.413253 1301264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17581-1186318/.minikube/key.pem
	I1114 14:17:36.413328 1301264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17581-1186318/.minikube/key.pem (1675 bytes)
	I1114 14:17:36.413431 1301264 exec_runner.go:144] found /home/jenkins/minikube-integration/17581-1186318/.minikube/ca.pem, removing ...
	I1114 14:17:36.413437 1301264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17581-1186318/.minikube/ca.pem
	I1114 14:17:36.413463 1301264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17581-1186318/.minikube/ca.pem (1082 bytes)
	I1114 14:17:36.413530 1301264 exec_runner.go:144] found /home/jenkins/minikube-integration/17581-1186318/.minikube/cert.pem, removing ...
	I1114 14:17:36.413534 1301264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17581-1186318/.minikube/cert.pem
	I1114 14:17:36.413558 1301264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17581-1186318/.minikube/cert.pem (1123 bytes)
	I1114 14:17:36.413634 1301264 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17581-1186318/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17581-1186318/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17581-1186318/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-895930 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-895930]
	I1114 14:17:36.868383 1301264 provision.go:172] copyRemoteCerts
	I1114 14:17:36.868497 1301264 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1114 14:17:36.868591 1301264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-895930
	I1114 14:17:36.895629 1301264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34449 SSHKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/missing-upgrade-895930/id_rsa Username:docker}
	I1114 14:17:36.999251 1301264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1114 14:17:37.037503 1301264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1114 14:17:37.064125 1301264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1114 14:17:37.090819 1301264 provision.go:86] duration metric: configureAuth took 714.894602ms
	I1114 14:17:37.090846 1301264 ubuntu.go:193] setting minikube options for container-runtime
	I1114 14:17:37.091034 1301264 config.go:182] Loaded profile config "missing-upgrade-895930": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1114 14:17:37.091142 1301264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-895930
	I1114 14:17:37.110007 1301264 main.go:141] libmachine: Using SSH client type: native
	I1114 14:17:37.110446 1301264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bded0] 0x3c0640 <nil>  [] 0s} 127.0.0.1 34449 <nil> <nil>}
	I1114 14:17:37.110469 1301264 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1114 14:17:37.585381 1301264 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1114 14:17:37.585401 1301264 machine.go:91] provisioned docker machine in 1.591603495s
	I1114 14:17:37.585410 1301264 client.go:171] LocalClient.Create took 3.883882445s
	I1114 14:17:37.585422 1301264 start.go:167] duration metric: libmachine.API.Create for "missing-upgrade-895930" took 3.883929584s
	I1114 14:17:37.585429 1301264 start.go:300] post-start starting for "missing-upgrade-895930" (driver="docker")
	I1114 14:17:37.585445 1301264 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1114 14:17:37.585509 1301264 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1114 14:17:37.585548 1301264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-895930
	I1114 14:17:37.606410 1301264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34449 SSHKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/missing-upgrade-895930/id_rsa Username:docker}
	I1114 14:17:37.723434 1301264 ssh_runner.go:195] Run: cat /etc/os-release
	I1114 14:17:37.732988 1301264 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1114 14:17:37.733053 1301264 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1114 14:17:37.733080 1301264 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1114 14:17:37.733102 1301264 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I1114 14:17:37.733136 1301264 filesync.go:126] Scanning /home/jenkins/minikube-integration/17581-1186318/.minikube/addons for local assets ...
	I1114 14:17:37.733210 1301264 filesync.go:126] Scanning /home/jenkins/minikube-integration/17581-1186318/.minikube/files for local assets ...
	I1114 14:17:37.733319 1301264 filesync.go:149] local asset: /home/jenkins/minikube-integration/17581-1186318/.minikube/files/etc/ssl/certs/11916902.pem -> 11916902.pem in /etc/ssl/certs
	I1114 14:17:37.733475 1301264 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1114 14:17:37.744447 1301264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/files/etc/ssl/certs/11916902.pem --> /etc/ssl/certs/11916902.pem (1708 bytes)
	I1114 14:17:37.774009 1301264 start.go:303] post-start completed in 188.55827ms
	I1114 14:17:37.774434 1301264 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-895930
	I1114 14:17:37.799108 1301264 profile.go:148] Saving config to /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/missing-upgrade-895930/config.json ...
	I1114 14:17:37.799386 1301264 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1114 14:17:37.799441 1301264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-895930
	I1114 14:17:37.832694 1301264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34449 SSHKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/missing-upgrade-895930/id_rsa Username:docker}
	I1114 14:17:37.935326 1301264 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1114 14:17:37.941304 1301264 start.go:128] duration metric: createHost completed in 4.242197921s
	I1114 14:17:37.941401 1301264 cli_runner.go:164] Run: docker container inspect missing-upgrade-895930 --format={{.State.Status}}
	W1114 14:17:37.965995 1301264 fix.go:128] unexpected machine state, will restart: <nil>
	I1114 14:17:37.966024 1301264 machine.go:88] provisioning docker machine ...
	I1114 14:17:37.966042 1301264 ubuntu.go:169] provisioning hostname "missing-upgrade-895930"
	I1114 14:17:37.966179 1301264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-895930
	I1114 14:17:37.989935 1301264 main.go:141] libmachine: Using SSH client type: native
	I1114 14:17:37.990357 1301264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bded0] 0x3c0640 <nil>  [] 0s} 127.0.0.1 34449 <nil> <nil>}
	I1114 14:17:37.990377 1301264 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-895930 && echo "missing-upgrade-895930" | sudo tee /etc/hostname
	I1114 14:17:38.154154 1301264 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-895930
	
	I1114 14:17:38.154232 1301264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-895930
	I1114 14:17:38.181410 1301264 main.go:141] libmachine: Using SSH client type: native
	I1114 14:17:38.181827 1301264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bded0] 0x3c0640 <nil>  [] 0s} 127.0.0.1 34449 <nil> <nil>}
	I1114 14:17:38.181854 1301264 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-895930' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-895930/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-895930' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1114 14:17:38.341843 1301264 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1114 14:17:38.341877 1301264 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17581-1186318/.minikube CaCertPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17581-1186318/.minikube}
	I1114 14:17:38.341907 1301264 ubuntu.go:177] setting up certificates
	I1114 14:17:38.341916 1301264 provision.go:83] configureAuth start
	I1114 14:17:38.341975 1301264 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-895930
	I1114 14:17:38.369489 1301264 provision.go:138] copyHostCerts
	I1114 14:17:38.369565 1301264 exec_runner.go:144] found /home/jenkins/minikube-integration/17581-1186318/.minikube/cert.pem, removing ...
	I1114 14:17:38.369579 1301264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17581-1186318/.minikube/cert.pem
	I1114 14:17:38.369654 1301264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17581-1186318/.minikube/cert.pem (1123 bytes)
	I1114 14:17:38.369756 1301264 exec_runner.go:144] found /home/jenkins/minikube-integration/17581-1186318/.minikube/key.pem, removing ...
	I1114 14:17:38.369775 1301264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17581-1186318/.minikube/key.pem
	I1114 14:17:38.369805 1301264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17581-1186318/.minikube/key.pem (1675 bytes)
	I1114 14:17:38.369897 1301264 exec_runner.go:144] found /home/jenkins/minikube-integration/17581-1186318/.minikube/ca.pem, removing ...
	I1114 14:17:38.369908 1301264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17581-1186318/.minikube/ca.pem
	I1114 14:17:38.369933 1301264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17581-1186318/.minikube/ca.pem (1082 bytes)
	I1114 14:17:38.369979 1301264 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17581-1186318/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17581-1186318/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17581-1186318/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-895930 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-895930]
	I1114 14:17:38.993632 1301264 provision.go:172] copyRemoteCerts
	I1114 14:17:38.993728 1301264 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1114 14:17:38.993801 1301264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-895930
	I1114 14:17:39.026129 1301264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34449 SSHKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/missing-upgrade-895930/id_rsa Username:docker}
	I1114 14:17:39.131159 1301264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1114 14:17:39.175027 1301264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1114 14:17:39.225944 1301264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1114 14:17:39.269667 1301264 provision.go:86] duration metric: configureAuth took 927.737889ms
	I1114 14:17:39.269739 1301264 ubuntu.go:193] setting minikube options for container-runtime
	I1114 14:17:39.269970 1301264 config.go:182] Loaded profile config "missing-upgrade-895930": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1114 14:17:39.270128 1301264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-895930
	I1114 14:17:39.316815 1301264 main.go:141] libmachine: Using SSH client type: native
	I1114 14:17:39.317220 1301264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bded0] 0x3c0640 <nil>  [] 0s} 127.0.0.1 34449 <nil> <nil>}
	I1114 14:17:39.317236 1301264 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1114 14:17:39.663897 1301264 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1114 14:17:39.663967 1301264 machine.go:91] provisioned docker machine in 1.697934049s
	I1114 14:17:39.663993 1301264 start.go:300] post-start starting for "missing-upgrade-895930" (driver="docker")
	I1114 14:17:39.664025 1301264 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1114 14:17:39.664109 1301264 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1114 14:17:39.664185 1301264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-895930
	I1114 14:17:39.683658 1301264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34449 SSHKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/missing-upgrade-895930/id_rsa Username:docker}
	I1114 14:17:39.786307 1301264 ssh_runner.go:195] Run: cat /etc/os-release
	I1114 14:17:39.790932 1301264 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1114 14:17:39.790963 1301264 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1114 14:17:39.790975 1301264 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1114 14:17:39.790982 1301264 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I1114 14:17:39.790994 1301264 filesync.go:126] Scanning /home/jenkins/minikube-integration/17581-1186318/.minikube/addons for local assets ...
	I1114 14:17:39.791057 1301264 filesync.go:126] Scanning /home/jenkins/minikube-integration/17581-1186318/.minikube/files for local assets ...
	I1114 14:17:39.791138 1301264 filesync.go:149] local asset: /home/jenkins/minikube-integration/17581-1186318/.minikube/files/etc/ssl/certs/11916902.pem -> 11916902.pem in /etc/ssl/certs
	I1114 14:17:39.791244 1301264 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1114 14:17:39.800486 1301264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/files/etc/ssl/certs/11916902.pem --> /etc/ssl/certs/11916902.pem (1708 bytes)
	I1114 14:17:39.824126 1301264 start.go:303] post-start completed in 160.097363ms
	I1114 14:17:39.824224 1301264 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1114 14:17:39.824299 1301264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-895930
	I1114 14:17:39.842967 1301264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34449 SSHKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/missing-upgrade-895930/id_rsa Username:docker}
	I1114 14:17:39.938894 1301264 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1114 14:17:39.946672 1301264 fix.go:56] fixHost completed within 22.913624987s
	I1114 14:17:39.946703 1301264 start.go:83] releasing machines lock for "missing-upgrade-895930", held for 22.91368536s
	I1114 14:17:39.946783 1301264 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-895930
	I1114 14:17:39.966817 1301264 ssh_runner.go:195] Run: cat /version.json
	I1114 14:17:39.966872 1301264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-895930
	I1114 14:17:39.966936 1301264 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1114 14:17:39.967014 1301264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-895930
	I1114 14:17:39.989272 1301264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34449 SSHKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/missing-upgrade-895930/id_rsa Username:docker}
	I1114 14:17:39.989166 1301264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34449 SSHKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/missing-upgrade-895930/id_rsa Username:docker}
	W1114 14:17:40.085719 1301264 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1114 14:17:40.085823 1301264 ssh_runner.go:195] Run: systemctl --version
	I1114 14:17:40.196628 1301264 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1114 14:17:40.320486 1301264 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1114 14:17:40.326424 1301264 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1114 14:17:40.353296 1301264 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1114 14:17:40.353429 1301264 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1114 14:17:40.393061 1301264 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1114 14:17:40.393132 1301264 start.go:472] detecting cgroup driver to use...
	I1114 14:17:40.393179 1301264 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1114 14:17:40.393250 1301264 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1114 14:17:40.423040 1301264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1114 14:17:40.435717 1301264 docker.go:203] disabling cri-docker service (if available) ...
	I1114 14:17:40.435841 1301264 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1114 14:17:40.449752 1301264 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1114 14:17:40.462635 1301264 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1114 14:17:40.475942 1301264 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1114 14:17:40.476032 1301264 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1114 14:17:40.600966 1301264 docker.go:219] disabling docker service ...
	I1114 14:17:40.601077 1301264 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1114 14:17:40.615698 1301264 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1114 14:17:40.628818 1301264 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1114 14:17:40.739855 1301264 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1114 14:17:40.851592 1301264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1114 14:17:40.863810 1301264 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1114 14:17:40.882485 1301264 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1114 14:17:40.882589 1301264 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 14:17:40.897005 1301264 out.go:177] 
	W1114 14:17:40.898939 1301264 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1114 14:17:40.899010 1301264 out.go:239] * 
	W1114 14:17:40.900049 1301264 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1114 14:17:40.902583 1301264 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:344: failed missing container upgrade from v1.17.0. args: out/minikube-linux-arm64 start -p missing-upgrade-895930 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio : exit status 90
version_upgrade_test.go:346: *** TestMissingContainerUpgrade FAILED at 2023-11-14 14:17:40.951681392 +0000 UTC m=+2626.069332381
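Note on the failure above: the captured stderr makes the root cause visible. The new binary tries to point cri-o at the pause image by editing /etc/crio/crio.conf.d/02-crio.conf, but the container was created from the v1.17.0-era base image (kicbase:v0.0.17, Ubuntu 20.04.1), which predates that drop-in file, so sed exits with status 2 and start aborts with RUNTIME_ENABLE. A minimal sketch of a guarded variant of the failing step, assuming a shell inside the node container; the fallback path /etc/crio/crio.conf is an assumption about where images of that vintage keep the setting:

	conf=/etc/crio/crio.conf.d/02-crio.conf
	# Fall back to the main config on base images that predate the drop-in directory (an assumed location).
	[ -f "$conf" ] || conf=/etc/crio/crio.conf
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$conf"
	sudo systemctl restart crio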
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-895930
helpers_test.go:235: (dbg) docker inspect missing-upgrade-895930:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "45fe1445ea871443b320ef3bb6688f3152181f95ee5ee62ff63edfaf413b7458",
	        "Created": "2023-11-14T14:17:34.554463877Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1302442,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-11-14T14:17:34.914708022Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:9b79b8263a5873a7b57b8bb7698df1f71e90108b3174dea92dc6c576c0a9dbf9",
	        "ResolvConfPath": "/var/lib/docker/containers/45fe1445ea871443b320ef3bb6688f3152181f95ee5ee62ff63edfaf413b7458/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/45fe1445ea871443b320ef3bb6688f3152181f95ee5ee62ff63edfaf413b7458/hostname",
	        "HostsPath": "/var/lib/docker/containers/45fe1445ea871443b320ef3bb6688f3152181f95ee5ee62ff63edfaf413b7458/hosts",
	        "LogPath": "/var/lib/docker/containers/45fe1445ea871443b320ef3bb6688f3152181f95ee5ee62ff63edfaf413b7458/45fe1445ea871443b320ef3bb6688f3152181f95ee5ee62ff63edfaf413b7458-json.log",
	        "Name": "/missing-upgrade-895930",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "missing-upgrade-895930:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "missing-upgrade-895930",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/0fe55d2705a5dd301f055a00f2bd9ba9d6226f9422af4f94dbeebf20f1a24cd3-init/diff:/var/lib/docker/overlay2/b57a36709a8966bb418afd115c9ed2fbfb497e6e3913122d1e79b3787e105a8c/diff:/var/lib/docker/overlay2/f258c23dda6d6495ee025ac423a195573b63202b4159499346942d0daf993a30/diff:/var/lib/docker/overlay2/03593902601fa895c008a19fe09a8523e07bb9187807a2892b392517b8fcc472/diff:/var/lib/docker/overlay2/15f0500aeac9f8f83c44c0f9ff246ff219b782ab94d0c946bb1609d40814fd87/diff:/var/lib/docker/overlay2/7d85032fb8ca206e7d6597209764ae251384d6e7f058239a60466be8d139ebe6/diff:/var/lib/docker/overlay2/49de1f46616b80e69289679bcf57d869ed8c7c3b86999127f977abe813386c18/diff:/var/lib/docker/overlay2/fd0bb4f315d81b7ac057331778bb7326f28b4d8c7151ed42f65877bcddc55ba6/diff:/var/lib/docker/overlay2/cf6068359dbecdc739ca3cc73e0c29d54b5a49d67f616d321456391286a83ee5/diff:/var/lib/docker/overlay2/0919de4adae05a2c4271d0edac6ed2c6d8f7e22a39670aa43d9c11c1b89f9dff/diff:/var/lib/docker/overlay2/1f05e4
bfcec64d64cbd4a684bedb920d5b627845708e3c638d5328aa3ceec26c/diff:/var/lib/docker/overlay2/b073becb3969ef897234882abcafcc7d523af849680db9518ed18a42ac276906/diff:/var/lib/docker/overlay2/052f2c4c40d57ae5707d9ca5c600cf85765a019dc760562bed903f17a7170cbb/diff:/var/lib/docker/overlay2/42661e01b93876734098c6cc47f33601642eeba375ce08eda12b22530be9b9dc/diff:/var/lib/docker/overlay2/5ae033b966aab90ee404521399eb73abf20694a9d9aa02c959f814d0d0ba5878/diff:/var/lib/docker/overlay2/c2fa8381d075234ae8a55687049fa5b8f398d2934a8bcb1aedd8ace6adb0f725/diff:/var/lib/docker/overlay2/9f12203c55eccbb00077189132e2953fc0431a0aa39ea5901c6660a1c8b223e6/diff:/var/lib/docker/overlay2/68b38b40552b8dc92391a7506db4eff5856a0fb706d78fc513a5c5b87f355f02/diff:/var/lib/docker/overlay2/adeacacaedbbab391516420b362652fe4578b1908e090194a42f95c8fb535eda/diff:/var/lib/docker/overlay2/5c51fc97dc146788113fc7c742f2cfbabb061a8333e9028ce9edb884191a98e6/diff:/var/lib/docker/overlay2/8f644e89b231a64555e598f9e6b4621062643137e708057b11d7cd5417ee13de/diff:/var/lib/d
ocker/overlay2/5d63fd78fc856f9e8141d433afc83c58a9de696ce47772c5111799db00bf814b/diff:/var/lib/docker/overlay2/0606d9baa4dfe4981e414256661271a09a01d9cfea945bf1b6c94e20bbcdd75e/diff:/var/lib/docker/overlay2/efe35f845747a0d02c62142d5a361c723b26863b38ae9e8fc6bec0567c2528bd/diff:/var/lib/docker/overlay2/cc37ad386cca175dc93f07dfafff838280ea55b3fa3e2c75c6a3fd6ee7b5ffb2/diff:/var/lib/docker/overlay2/d5bd950087d7ea440fc8b34b560dce839d2c9a3d7bfc7cfe4239bab839452f07/diff:/var/lib/docker/overlay2/0db6ce9dc8f04f946aee773c848da413a2c81ff90f5f0e2d02063559123ae120/diff:/var/lib/docker/overlay2/ad70bfdacfe898dde74c040ae68e337ebe8d0de072632ab42842f6bf5e3dc396/diff:/var/lib/docker/overlay2/ce023ce72c41557b28dc93c9637ceaa65cf8b3fbb48892e33ee31503132f3430/diff:/var/lib/docker/overlay2/3c31653f722bd62d5a3737a86560a28eb8c46abacdb5a644c20de9fac090eb66/diff:/var/lib/docker/overlay2/031768795c00092f419babf1ab217f1cbe4a8af9ce70a2298f903f02e2125a08/diff:/var/lib/docker/overlay2/454234ccfb92ea2279334bba6a6c60a29735a7d70005da9729442bfbe9d
8fd7b/diff:/var/lib/docker/overlay2/8099e5b0995442cc441c93e990233e024183d72218a4bfa41511f15a06d3e5e6/diff:/var/lib/docker/overlay2/960ef1e7b35a99b224a2fcf160e18dab29c8efe9d3f73a19ca6acaadc1e1f335/diff:/var/lib/docker/overlay2/89d4d0a1caca735336538faf7ca51f0a9fb3f8868e40cb8260e3eff8af04bd0b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0fe55d2705a5dd301f055a00f2bd9ba9d6226f9422af4f94dbeebf20f1a24cd3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0fe55d2705a5dd301f055a00f2bd9ba9d6226f9422af4f94dbeebf20f1a24cd3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0fe55d2705a5dd301f055a00f2bd9ba9d6226f9422af4f94dbeebf20f1a24cd3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "missing-upgrade-895930",
	                "Source": "/var/lib/docker/volumes/missing-upgrade-895930/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "missing-upgrade-895930",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "missing-upgrade-895930",
	                "name.minikube.sigs.k8s.io": "missing-upgrade-895930",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f61347fde34f133fbaf229cbc3f5692dc154116d38a028a90803bcbf17b6e595",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34449"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34448"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34445"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34447"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34446"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/f61347fde34f",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "missing-upgrade-895930": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "45fe1445ea87",
	                        "missing-upgrade-895930"
	                    ],
	                    "NetworkID": "877e67db101f1431f116c2132af1e96cac63a17edb161fa63750ce226e963cfb",
	                    "EndpointID": "c013ce4bb6dcd596b158ad9135544494448d79b2c550f52fe0402c31e45c17e4",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
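The inspect output shows the container itself is healthy as far as Docker is concerned: State.Status is "running" and 22/tcp is published on 127.0.0.1:34449, the same endpoint every SSH client in the log above dialed, so the failure is confined to the cri-o configuration step rather than the container lifecycle. For reference, the published SSH port can be read back either with docker port or with the same Go template the provisioner logs (a sketch; substitute the profile name for another cluster):

	docker port missing-upgrade-895930 22/tcp
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' missing-upgrade-895930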
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p missing-upgrade-895930 -n missing-upgrade-895930
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p missing-upgrade-895930 -n missing-upgrade-895930: exit status 6 (344.487431ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1114 14:17:41.297926 1303577 status.go:415] kubeconfig endpoint: got: 192.168.59.187:8443, want: 192.168.76.2:8443

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "missing-upgrade-895930" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "missing-upgrade-895930" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-895930
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-895930: (2.071832919s)
--- FAIL: TestMissingContainerUpgrade (142.13s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (114.16s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.17.0.1747692760.exe start -p stopped-upgrade-697984 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.17.0.1747692760.exe start -p stopped-upgrade-697984 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m34.700541591s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.17.0.1747692760.exe -p stopped-upgrade-697984 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.17.0.1747692760.exe -p stopped-upgrade-697984 stop: (11.978858321s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-697984 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:211: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p stopped-upgrade-697984 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (7.480203886s)
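This test drives the stopped-binary upgrade path: start the cluster with the archived v1.17.0 binary, stop it, then start the same profile with the binary under test. Only the last step fails, and it exits in under eight seconds with status 90, the same exit code the RUNTIME_ENABLE abort produced in TestMissingContainerUpgrade above, which is consistent with both tests sharing the kicbase:v0.0.17 base image. The equivalent manual flow, as a sketch (the versioned binary name is illustrative; the test actually runs a temporary copy under /tmp):

	minikube-v1.17.0 start -p stopped-upgrade-697984 --memory=2200 --vm-driver=docker --container-runtime=crio
	minikube-v1.17.0 -p stopped-upgrade-697984 stop
	out/minikube-linux-arm64 start -p stopped-upgrade-697984 --memory=2200 --alsologtostderr -v=1 --driver=docker --container-runtime=crio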

                                                
                                                
-- stdout --
	* [stopped-upgrade-697984] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17581
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17581-1186318/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17581-1186318/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	* Using the docker driver based on existing profile
	* Starting control plane node stopped-upgrade-697984 in cluster stopped-upgrade-697984
	* Pulling base image ...
	* Restarting existing docker container for "stopped-upgrade-697984" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1114 14:19:31.423563 1309802 out.go:296] Setting OutFile to fd 1 ...
	I1114 14:19:31.423836 1309802 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 14:19:31.423865 1309802 out.go:309] Setting ErrFile to fd 2...
	I1114 14:19:31.423885 1309802 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 14:19:31.424211 1309802 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17581-1186318/.minikube/bin
	I1114 14:19:31.425113 1309802 out.go:303] Setting JSON to false
	I1114 14:19:31.426475 1309802 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":39718,"bootTime":1699931854,"procs":289,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1114 14:19:31.426606 1309802 start.go:138] virtualization:  
	I1114 14:19:31.431135 1309802 out.go:177] * [stopped-upgrade-697984] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1114 14:19:31.433616 1309802 preload.go:306] deleting older generation preload /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-cri-o-overlay-arm64.tar.lz4
	I1114 14:19:31.435905 1309802 out.go:177]   - MINIKUBE_LOCATION=17581
	I1114 14:19:31.433785 1309802 notify.go:220] Checking for updates...
	I1114 14:19:31.439510 1309802 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1114 14:19:31.442890 1309802 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17581-1186318/kubeconfig
	I1114 14:19:31.445029 1309802 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17581-1186318/.minikube
	I1114 14:19:31.446814 1309802 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1114 14:19:31.448627 1309802 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1114 14:19:31.451159 1309802 config.go:182] Loaded profile config "stopped-upgrade-697984": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1114 14:19:31.453600 1309802 out.go:177] * Kubernetes 1.28.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.3
	I1114 14:19:31.455625 1309802 driver.go:378] Setting default libvirt URI to qemu:///system
	I1114 14:19:31.488679 1309802 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1114 14:19:31.488774 1309802 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1114 14:19:31.615861 1309802 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:45 SystemTime:2023-11-14 14:19:31.605961934 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1114 14:19:31.615997 1309802 docker.go:295] overlay module found
	I1114 14:19:31.618661 1309802 out.go:177] * Using the docker driver based on existing profile
	I1114 14:19:31.620677 1309802 start.go:298] selected driver: docker
	I1114 14:19:31.620701 1309802 start.go:902] validating driver "docker" against &{Name:stopped-upgrade-697984 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:stopped-upgrade-697984 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.129 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1114 14:19:31.620819 1309802 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1114 14:19:31.621480 1309802 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1114 14:19:31.687792 1309802 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:45 SystemTime:2023-11-14 14:19:31.678539783 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1114 14:19:31.688119 1309802 cni.go:84] Creating CNI manager for ""
	I1114 14:19:31.688137 1309802 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1114 14:19:31.688152 1309802 start_flags.go:323] config:
	{Name:stopped-upgrade-697984 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:stopped-upgrade-697984 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.59.129 Port:8443 KubernetesVersion:v1.20.2 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1114 14:19:31.692122 1309802 out.go:177] * Starting control plane node stopped-upgrade-697984 in cluster stopped-upgrade-697984
	I1114 14:19:31.693984 1309802 cache.go:121] Beginning downloading kic base image for docker with crio
	I1114 14:19:31.696250 1309802 out.go:177] * Pulling base image ...
	I1114 14:19:31.698369 1309802 preload.go:132] Checking if preload exists for k8s version v1.20.2 and runtime crio
	I1114 14:19:31.698466 1309802 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon
	I1114 14:19:31.717527 1309802 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e in local docker daemon, skipping pull
	I1114 14:19:31.717551 1309802 cache.go:144] gcr.io/k8s-minikube/kicbase:v0.0.17@sha256:1cd2e039ec9d418e6380b2fa0280503a72e5b282adea674ee67882f59f4f546e exists in daemon, skipping load
	W1114 14:19:31.764558 1309802 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.2/preloaded-images-k8s-v18-v1.20.2-cri-o-overlay-arm64.tar.lz4 status code: 404
	I1114 14:19:31.764740 1309802 profile.go:148] Saving config to /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/stopped-upgrade-697984/config.json ...
	I1114 14:19:31.764857 1309802 cache.go:107] acquiring lock: {Name:mkc3f9e8e80dc5cc581400c732c2f75eea7927c7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 14:19:31.764942 1309802 cache.go:115] /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1114 14:19:31.764952 1309802 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 101.956µs
	I1114 14:19:31.764961 1309802 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1114 14:19:31.764971 1309802 cache.go:107] acquiring lock: {Name:mk69a9d8cb51e3aa2e98715b9e677afbd5be8339 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 14:19:31.765000 1309802 cache.go:115] /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 exists
	I1114 14:19:31.765000 1309802 cache.go:194] Successfully downloaded all kic artifacts
	I1114 14:19:31.765005 1309802 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.20.2" -> "/home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2" took 35.396µs
	I1114 14:19:31.765012 1309802 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.20.2 -> /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.20.2 succeeded
	I1114 14:19:31.765036 1309802 cache.go:107] acquiring lock: {Name:mkeaa593a19bc596f779e94db679e747fb3e86dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 14:19:31.765042 1309802 start.go:365] acquiring machines lock for stopped-upgrade-697984: {Name:mk5fb3b052af764fdc52c916cefb2bc9c9a89342 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 14:19:31.765064 1309802 cache.go:115] /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 exists
	I1114 14:19:31.765069 1309802 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.20.2" -> "/home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2" took 35.593µs
	I1114 14:19:31.765079 1309802 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.20.2 -> /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.20.2 succeeded
	I1114 14:19:31.765082 1309802 start.go:369] acquired machines lock for "stopped-upgrade-697984" in 26.715µs
	I1114 14:19:31.765088 1309802 cache.go:107] acquiring lock: {Name:mkadd812de336b999f8e5a8809642906e01f1791 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 14:19:31.765095 1309802 start.go:96] Skipping create...Using existing machine configuration
	I1114 14:19:31.765114 1309802 cache.go:115] /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 exists
	I1114 14:19:31.765117 1309802 fix.go:54] fixHost starting: 
	I1114 14:19:31.765119 1309802 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.20.2" -> "/home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2" took 32.213µs
	I1114 14:19:31.765126 1309802 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.20.2 -> /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.20.2 succeeded
	I1114 14:19:31.765134 1309802 cache.go:107] acquiring lock: {Name:mkf5a96f0221a0606c0d1d34ec321ba5896544c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 14:19:31.765159 1309802 cache.go:115] /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 exists
	I1114 14:19:31.765164 1309802 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.20.2" -> "/home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2" took 31.244µs
	I1114 14:19:31.765170 1309802 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.20.2 -> /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.20.2 succeeded
	I1114 14:19:31.765180 1309802 cache.go:107] acquiring lock: {Name:mkd39b5b1ff28004cfb5f4307bd5b83ed11c8162 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 14:19:31.765207 1309802 cache.go:115] /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 exists
	I1114 14:19:31.765211 1309802 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2" took 34.502µs
	I1114 14:19:31.765217 1309802 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2 succeeded
	I1114 14:19:31.765227 1309802 cache.go:107] acquiring lock: {Name:mk1e00d1f8459fd8271d7a94ac6e5793eb6baf5b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 14:19:31.765252 1309802 cache.go:115] /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 exists
	I1114 14:19:31.765256 1309802 cache.go:96] cache image "registry.k8s.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0" took 32.911µs
	I1114 14:19:31.765262 1309802 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.13-0 succeeded
	I1114 14:19:31.765270 1309802 cache.go:107] acquiring lock: {Name:mkc2480d47d595c5e286f3ea4f50224c887ce0f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1114 14:19:31.765296 1309802 cache.go:115] /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 exists
	I1114 14:19:31.765300 1309802 cache.go:96] cache image "registry.k8s.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0" took 30.925µs
	I1114 14:19:31.765306 1309802 cache.go:80] save to tar file registry.k8s.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.7.0 succeeded
	I1114 14:19:31.765312 1309802 cache.go:87] Successfully saved all images to host disk.
	I1114 14:19:31.765380 1309802 cli_runner.go:164] Run: docker container inspect stopped-upgrade-697984 --format={{.State.Status}}
	I1114 14:19:31.783554 1309802 fix.go:102] recreateIfNeeded on stopped-upgrade-697984: state=Stopped err=<nil>
	W1114 14:19:31.783596 1309802 fix.go:128] unexpected machine state, will restart: <nil>
	I1114 14:19:31.787363 1309802 out.go:177] * Restarting existing docker container for "stopped-upgrade-697984" ...
	I1114 14:19:31.789473 1309802 cli_runner.go:164] Run: docker start stopped-upgrade-697984
	I1114 14:19:32.200488 1309802 cli_runner.go:164] Run: docker container inspect stopped-upgrade-697984 --format={{.State.Status}}
	I1114 14:19:32.258374 1309802 kic.go:430] container "stopped-upgrade-697984" state is running.
	I1114 14:19:32.260416 1309802 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-697984
	I1114 14:19:32.286021 1309802 profile.go:148] Saving config to /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/stopped-upgrade-697984/config.json ...
	I1114 14:19:32.291148 1309802 machine.go:88] provisioning docker machine ...
	I1114 14:19:32.292370 1309802 ubuntu.go:169] provisioning hostname "stopped-upgrade-697984"
	I1114 14:19:32.292506 1309802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-697984
	I1114 14:19:32.320397 1309802 main.go:141] libmachine: Using SSH client type: native
	I1114 14:19:32.321338 1309802 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bded0] 0x3c0640 <nil>  [] 0s} 127.0.0.1 34463 <nil> <nil>}
	I1114 14:19:32.321486 1309802 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-697984 && echo "stopped-upgrade-697984" | sudo tee /etc/hostname
	I1114 14:19:32.322287 1309802 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1114 14:19:35.477548 1309802 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-697984
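
For anyone replaying this provisioning step by hand, a minimal check that the hostname change took effect inside the container (hypothetical; the test itself does not run this):

    cat /etc/hostname   # expect: stopped-upgrade-697984
    hostname            # should print the same name
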
	
	I1114 14:19:35.477640 1309802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-697984
	I1114 14:19:35.496516 1309802 main.go:141] libmachine: Using SSH client type: native
	I1114 14:19:35.496969 1309802 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bded0] 0x3c0640 <nil>  [] 0s} 127.0.0.1 34463 <nil> <nil>}
	I1114 14:19:35.496997 1309802 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-697984' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-697984/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-697984' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1114 14:19:35.637344 1309802 main.go:141] libmachine: SSH cmd err, output: <nil>: 
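
The script above is idempotent: it only touches /etc/hosts when no line already ends in the hostname, and then either rewrites the existing 127.0.1.1 entry or appends a new one. A hedged spot check of the result (not part of the test run):

    grep -E '^127\.0\.1\.1[[:space:]]' /etc/hosts
    # expect: 127.0.1.1 stopped-upgrade-697984 (unless the name was already mapped on another line)
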
	I1114 14:19:35.637375 1309802 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17581-1186318/.minikube CaCertPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17581-1186318/.minikube}
	I1114 14:19:35.637404 1309802 ubuntu.go:177] setting up certificates
	I1114 14:19:35.637415 1309802 provision.go:83] configureAuth start
	I1114 14:19:35.637477 1309802 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-697984
	I1114 14:19:35.655735 1309802 provision.go:138] copyHostCerts
	I1114 14:19:35.655796 1309802 exec_runner.go:144] found /home/jenkins/minikube-integration/17581-1186318/.minikube/ca.pem, removing ...
	I1114 14:19:35.655804 1309802 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17581-1186318/.minikube/ca.pem
	I1114 14:19:35.655875 1309802 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17581-1186318/.minikube/ca.pem (1082 bytes)
	I1114 14:19:35.655967 1309802 exec_runner.go:144] found /home/jenkins/minikube-integration/17581-1186318/.minikube/cert.pem, removing ...
	I1114 14:19:35.655976 1309802 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17581-1186318/.minikube/cert.pem
	I1114 14:19:35.656014 1309802 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17581-1186318/.minikube/cert.pem (1123 bytes)
	I1114 14:19:35.656076 1309802 exec_runner.go:144] found /home/jenkins/minikube-integration/17581-1186318/.minikube/key.pem, removing ...
	I1114 14:19:35.656081 1309802 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17581-1186318/.minikube/key.pem
	I1114 14:19:35.656101 1309802 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17581-1186318/.minikube/key.pem (1675 bytes)
	I1114 14:19:35.656142 1309802 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17581-1186318/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17581-1186318/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17581-1186318/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-697984 san=[192.168.59.129 127.0.0.1 localhost 127.0.0.1 minikube stopped-upgrade-697984]
	I1114 14:19:36.781413 1309802 provision.go:172] copyRemoteCerts
	I1114 14:19:36.781492 1309802 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1114 14:19:36.781565 1309802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-697984
	I1114 14:19:36.801270 1309802 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34463 SSHKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/stopped-upgrade-697984/id_rsa Username:docker}
	I1114 14:19:36.902389 1309802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1114 14:19:36.926095 1309802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1114 14:19:36.950076 1309802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1114 14:19:36.974397 1309802 provision.go:86] duration metric: configureAuth took 1.33696834s
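
The three scp calls above install the server certificate, its key, and the CA certificate into /etc/docker inside the container. To inspect the SANs baked into the generated server certificate on the host side, assuming openssl is available (the path is copied from the log):

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/17581-1186318/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'
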
	I1114 14:19:36.974424 1309802 ubuntu.go:193] setting minikube options for container-runtime
	I1114 14:19:36.974613 1309802 config.go:182] Loaded profile config "stopped-upgrade-697984": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.20.2
	I1114 14:19:36.974729 1309802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-697984
	I1114 14:19:36.996707 1309802 main.go:141] libmachine: Using SSH client type: native
	I1114 14:19:36.997301 1309802 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3bded0] 0x3c0640 <nil>  [] 0s} 127.0.0.1 34463 <nil> <nil>}
	I1114 14:19:36.997349 1309802 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1114 14:19:37.441084 1309802 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1114 14:19:37.441105 1309802 machine.go:91] provisioned docker machine in 5.148755067s
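
A hedged way to confirm by hand that the CRI-O drop-in written above is in place and that the service came back up (file path and unit name taken from the log):

    cat /etc/sysconfig/crio.minikube   # expect the CRIO_MINIKUBE_OPTIONS line
    systemctl is-active crio           # expect: active
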
	I1114 14:19:37.441116 1309802 start.go:300] post-start starting for "stopped-upgrade-697984" (driver="docker")
	I1114 14:19:37.441127 1309802 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1114 14:19:37.441186 1309802 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1114 14:19:37.441232 1309802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-697984
	I1114 14:19:37.478004 1309802 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34463 SSHKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/stopped-upgrade-697984/id_rsa Username:docker}
	I1114 14:19:37.578846 1309802 ssh_runner.go:195] Run: cat /etc/os-release
	I1114 14:19:37.583475 1309802 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1114 14:19:37.583499 1309802 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1114 14:19:37.583511 1309802 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1114 14:19:37.583518 1309802 info.go:137] Remote host: Ubuntu 20.04.1 LTS
	I1114 14:19:37.583534 1309802 filesync.go:126] Scanning /home/jenkins/minikube-integration/17581-1186318/.minikube/addons for local assets ...
	I1114 14:19:37.583593 1309802 filesync.go:126] Scanning /home/jenkins/minikube-integration/17581-1186318/.minikube/files for local assets ...
	I1114 14:19:37.583671 1309802 filesync.go:149] local asset: /home/jenkins/minikube-integration/17581-1186318/.minikube/files/etc/ssl/certs/11916902.pem -> 11916902.pem in /etc/ssl/certs
	I1114 14:19:37.583780 1309802 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1114 14:19:37.593707 1309802 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17581-1186318/.minikube/files/etc/ssl/certs/11916902.pem --> /etc/ssl/certs/11916902.pem (1708 bytes)
	I1114 14:19:37.620591 1309802 start.go:303] post-start completed in 179.460856ms
	I1114 14:19:37.620731 1309802 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1114 14:19:37.620813 1309802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-697984
	I1114 14:19:37.648641 1309802 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34463 SSHKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/stopped-upgrade-697984/id_rsa Username:docker}
	I1114 14:19:37.749642 1309802 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1114 14:19:37.758840 1309802 fix.go:56] fixHost completed within 5.993731653s
	I1114 14:19:37.758865 1309802 start.go:83] releasing machines lock for "stopped-upgrade-697984", held for 5.993774147s
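
The two df probes just above read, respectively, the percentage of /var in use and the space still available; run by hand they look like this (same commands as the log, comments added):

    df -h /var  | awk 'NR==2{print $5}'   # capacity used, e.g. 23%
    df -BG /var | awk 'NR==2{print $4}'   # space available in 1G blocks, e.g. 85G
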
	I1114 14:19:37.758936 1309802 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-697984
	I1114 14:19:37.785325 1309802 ssh_runner.go:195] Run: cat /version.json
	I1114 14:19:37.785340 1309802 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1114 14:19:37.785384 1309802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-697984
	I1114 14:19:37.785385 1309802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-697984
	I1114 14:19:37.823918 1309802 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34463 SSHKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/stopped-upgrade-697984/id_rsa Username:docker}
	I1114 14:19:37.842125 1309802 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34463 SSHKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/stopped-upgrade-697984/id_rsa Username:docker}
	W1114 14:19:37.941165 1309802 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1114 14:19:37.941249 1309802 ssh_runner.go:195] Run: systemctl --version
	I1114 14:19:38.097864 1309802 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1114 14:19:38.249199 1309802 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1114 14:19:38.254945 1309802 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1114 14:19:38.278606 1309802 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1114 14:19:38.278685 1309802 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1114 14:19:38.307705 1309802 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
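
To list which CNI configs the two find/mv passes above parked, one could run this hypothetical check (the .mk_disabled suffix comes from the commands in the log):

    ls /etc/cni/net.d/*.mk_disabled
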
	I1114 14:19:38.307768 1309802 start.go:472] detecting cgroup driver to use...
	I1114 14:19:38.307812 1309802 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I1114 14:19:38.307892 1309802 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1114 14:19:38.336297 1309802 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1114 14:19:38.349004 1309802 docker.go:203] disabling cri-docker service (if available) ...
	I1114 14:19:38.349072 1309802 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1114 14:19:38.361449 1309802 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1114 14:19:38.373844 1309802 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1114 14:19:38.386409 1309802 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1114 14:19:38.386473 1309802 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1114 14:19:38.489834 1309802 docker.go:219] disabling docker service ...
	I1114 14:19:38.489946 1309802 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1114 14:19:38.503492 1309802 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1114 14:19:38.515558 1309802 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1114 14:19:38.618527 1309802 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1114 14:19:38.744684 1309802 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1114 14:19:38.757734 1309802 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1114 14:19:38.774852 1309802 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1114 14:19:38.774967 1309802 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1114 14:19:38.788830 1309802 out.go:177] 
	W1114 14:19:38.790954 1309802 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1114 14:19:38.790980 1309802 out.go:239] * 
	W1114 14:19:38.792024 1309802 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1114 14:19:38.793444 1309802 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:213: upgrade from v1.17.0 to HEAD failed: out/minikube-linux-arm64 start -p stopped-upgrade-697984 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (114.16s)
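
The root cause is visible a few lines up: this test restarts a machine built from the old kicbase v0.0.17 image, which appears to predate the /etc/crio/crio.conf.d drop-in directory, so the sed that rewrites pause_image exits with status 2. A hypothetical guard that would tolerate the missing drop-in, shown for illustration only (this is not what minikube runs):

    # Fall back to the main CRI-O config when the drop-in is absent.
    conf=/etc/crio/crio.conf.d/02-crio.conf
    [ -f "$conf" ] || conf=/etc/crio/crio.conf
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$conf"
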

                                                
                                    

Test pass (271/308)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 14.14
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.09
10 TestDownloadOnly/v1.28.3/json-events 13.99
11 TestDownloadOnly/v1.28.3/preload-exists 0
15 TestDownloadOnly/v1.28.3/LogsDuration 0.09
16 TestDownloadOnly/DeleteAll 0.25
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.16
19 TestBinaryMirror 0.65
23 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.1
24 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.1
25 TestAddons/Setup 169.14
27 TestAddons/parallel/Registry 14.93
29 TestAddons/parallel/InspektorGadget 10.89
30 TestAddons/parallel/MetricsServer 5.98
33 TestAddons/parallel/CSI 35.63
34 TestAddons/parallel/Headlamp 12.42
35 TestAddons/parallel/CloudSpanner 5.65
36 TestAddons/parallel/LocalPath 53.76
37 TestAddons/parallel/NvidiaDevicePlugin 5.74
40 TestAddons/serial/GCPAuth/Namespaces 0.17
41 TestAddons/StoppedEnableDisable 12.43
42 TestCertOptions 39.75
43 TestCertExpiration 255.96
45 TestForceSystemdFlag 41.65
46 TestForceSystemdEnv 42.58
52 TestErrorSpam/setup 31.55
53 TestErrorSpam/start 0.98
54 TestErrorSpam/status 1.22
55 TestErrorSpam/pause 1.93
56 TestErrorSpam/unpause 2.02
57 TestErrorSpam/stop 1.5
60 TestFunctional/serial/CopySyncFile 0
61 TestFunctional/serial/StartWithProxy 76.41
62 TestFunctional/serial/AuditLog 0
63 TestFunctional/serial/SoftStart 39.99
64 TestFunctional/serial/KubeContext 0.06
65 TestFunctional/serial/KubectlGetPods 0.11
68 TestFunctional/serial/CacheCmd/cache/add_remote 3.76
69 TestFunctional/serial/CacheCmd/cache/add_local 1.17
70 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.08
71 TestFunctional/serial/CacheCmd/cache/list 0.1
72 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.37
73 TestFunctional/serial/CacheCmd/cache/cache_reload 2.21
74 TestFunctional/serial/CacheCmd/cache/delete 0.16
75 TestFunctional/serial/MinikubeKubectlCmd 0.16
76 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.17
77 TestFunctional/serial/ExtraConfig 32.28
78 TestFunctional/serial/ComponentHealth 0.11
79 TestFunctional/serial/LogsCmd 1.88
80 TestFunctional/serial/LogsFileCmd 1.89
81 TestFunctional/serial/InvalidService 4.9
83 TestFunctional/parallel/ConfigCmd 0.69
84 TestFunctional/parallel/DashboardCmd 14.15
85 TestFunctional/parallel/DryRun 0.52
86 TestFunctional/parallel/InternationalLanguage 0.23
87 TestFunctional/parallel/StatusCmd 1.17
91 TestFunctional/parallel/ServiceCmdConnect 35.7
92 TestFunctional/parallel/AddonsCmd 0.18
95 TestFunctional/parallel/SSHCmd 0.89
96 TestFunctional/parallel/CpCmd 1.7
98 TestFunctional/parallel/FileSync 0.35
99 TestFunctional/parallel/CertSync 1.95
103 TestFunctional/parallel/NodeLabels 0.09
105 TestFunctional/parallel/NonActiveRuntimeDisabled 0.68
107 TestFunctional/parallel/License 0.27
109 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.73
110 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
112 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.4
113 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.09
114 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
118 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
119 TestFunctional/parallel/ServiceCmd/DeployApp 7.26
120 TestFunctional/parallel/ServiceCmd/List 0.58
121 TestFunctional/parallel/ServiceCmd/JSONOutput 0.59
122 TestFunctional/parallel/ServiceCmd/HTTPS 0.45
123 TestFunctional/parallel/ServiceCmd/Format 0.48
124 TestFunctional/parallel/ServiceCmd/URL 0.43
125 TestFunctional/parallel/ProfileCmd/profile_not_create 0.47
126 TestFunctional/parallel/ProfileCmd/profile_list 0.43
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.44
128 TestFunctional/parallel/MountCmd/any-port 62.45
129 TestFunctional/parallel/MountCmd/specific-port 1.85
130 TestFunctional/parallel/MountCmd/VerifyCleanup 1.53
131 TestFunctional/parallel/Version/short 0.09
132 TestFunctional/parallel/Version/components 0.92
133 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
134 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
135 TestFunctional/parallel/ImageCommands/ImageListJson 0.25
136 TestFunctional/parallel/ImageCommands/ImageListYaml 0.29
137 TestFunctional/parallel/ImageCommands/ImageBuild 2.77
138 TestFunctional/parallel/ImageCommands/Setup 2.54
139 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.41
140 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.1
141 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.84
142 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.93
143 TestFunctional/parallel/ImageCommands/ImageRemove 0.56
144 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.29
145 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.04
146 TestFunctional/parallel/UpdateContextCmd/no_changes 0.2
147 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.18
148 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.17
149 TestFunctional/delete_addon-resizer_images 0.09
150 TestFunctional/delete_my-image_image 0.02
151 TestFunctional/delete_minikube_cached_images 0.02
155 TestIngressAddonLegacy/StartLegacyK8sCluster 92.2
158 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.66
162 TestJSONOutput/start/Command 49.28
163 TestJSONOutput/start/Audit 0
165 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
166 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
168 TestJSONOutput/pause/Command 0.86
169 TestJSONOutput/pause/Audit 0
171 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
172 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
174 TestJSONOutput/unpause/Command 0.74
175 TestJSONOutput/unpause/Audit 0
177 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
180 TestJSONOutput/stop/Command 5.96
181 TestJSONOutput/stop/Audit 0
183 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
185 TestErrorJSONOutput 0.27
187 TestKicCustomNetwork/create_custom_network 44.28
188 TestKicCustomNetwork/use_default_bridge_network 36.23
189 TestKicExistingNetwork 34.51
190 TestKicCustomSubnet 37.42
191 TestKicStaticIP 39.43
192 TestMainNoArgs 0.07
193 TestMinikubeProfile 69.96
196 TestMountStart/serial/StartWithMountFirst 9.92
197 TestMountStart/serial/VerifyMountFirst 0.3
198 TestMountStart/serial/StartWithMountSecond 7.41
199 TestMountStart/serial/VerifyMountSecond 0.31
200 TestMountStart/serial/DeleteFirst 1.7
201 TestMountStart/serial/VerifyMountPostDelete 0.29
202 TestMountStart/serial/Stop 1.24
203 TestMountStart/serial/RestartStopped 7.78
204 TestMountStart/serial/VerifyMountPostStop 0.3
207 TestMultiNode/serial/FreshStart2Nodes 95.78
208 TestMultiNode/serial/DeployApp2Nodes 5.67
210 TestMultiNode/serial/AddNode 19.99
211 TestMultiNode/serial/ProfileList 0.4
212 TestMultiNode/serial/CopyFile 11.61
213 TestMultiNode/serial/StopNode 2.39
214 TestMultiNode/serial/StartAfterStop 12.78
215 TestMultiNode/serial/RestartKeepsNodes 120.43
216 TestMultiNode/serial/DeleteNode 5.16
217 TestMultiNode/serial/StopMultiNode 24.17
218 TestMultiNode/serial/RestartMultiNode 89.12
219 TestMultiNode/serial/ValidateNameConflict 36.8
224 TestPreload 149.49
226 TestScheduledStopUnix 105.7
229 TestInsufficientStorage 11.2
232 TestKubernetesUpgrade 420.1
235 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
236 TestNoKubernetes/serial/StartWithK8s 50.44
237 TestNoKubernetes/serial/StartWithStopK8s 10.54
238 TestNoKubernetes/serial/Start 9.95
239 TestNoKubernetes/serial/VerifyK8sNotRunning 0.33
240 TestNoKubernetes/serial/ProfileList 0.93
241 TestNoKubernetes/serial/Stop 1.24
242 TestNoKubernetes/serial/StartNoArgs 8.85
243 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.4
244 TestStoppedBinaryUpgrade/Setup 1.26
246 TestStoppedBinaryUpgrade/MinikubeLogs 0.7
255 TestPause/serial/Start 82.57
256 TestPause/serial/SecondStartNoReconfiguration 40.58
257 TestPause/serial/Pause 0.92
258 TestPause/serial/VerifyStatus 0.37
259 TestPause/serial/Unpause 0.77
260 TestPause/serial/PauseAgain 1.21
261 TestPause/serial/DeletePaused 3.4
262 TestPause/serial/VerifyDeletedResources 8.4
270 TestNetworkPlugins/group/false 6.24
275 TestStartStop/group/old-k8s-version/serial/FirstStart 132.53
276 TestStartStop/group/old-k8s-version/serial/DeployApp 9.61
277 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.08
278 TestStartStop/group/old-k8s-version/serial/Stop 12.26
279 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.37
280 TestStartStop/group/old-k8s-version/serial/SecondStart 443.91
282 TestStartStop/group/no-preload/serial/FirstStart 98.05
283 TestStartStop/group/no-preload/serial/DeployApp 9.56
284 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.25
285 TestStartStop/group/no-preload/serial/Stop 12.14
286 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.22
287 TestStartStop/group/no-preload/serial/SecondStart 630.22
288 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.03
289 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
290 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.38
291 TestStartStop/group/old-k8s-version/serial/Pause 3.66
293 TestStartStop/group/embed-certs/serial/FirstStart 85.82
294 TestStartStop/group/embed-certs/serial/DeployApp 9.54
295 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.23
296 TestStartStop/group/embed-certs/serial/Stop 12.11
297 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.23
298 TestStartStop/group/embed-certs/serial/SecondStart 345.44
299 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.03
300 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.12
301 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.44
302 TestStartStop/group/no-preload/serial/Pause 3.68
304 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 46.13
305 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.49
306 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.28
307 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.12
308 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.24
309 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 603.87
310 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 12.03
311 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
312 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.37
313 TestStartStop/group/embed-certs/serial/Pause 3.5
315 TestStartStop/group/newest-cni/serial/FirstStart 48.27
316 TestStartStop/group/newest-cni/serial/DeployApp 0
317 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.11
318 TestStartStop/group/newest-cni/serial/Stop 1.34
319 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.24
320 TestStartStop/group/newest-cni/serial/SecondStart 30.37
321 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
322 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
323 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.39
324 TestStartStop/group/newest-cni/serial/Pause 3.47
325 TestNetworkPlugins/group/auto/Start 81.3
326 TestNetworkPlugins/group/auto/KubeletFlags 0.38
327 TestNetworkPlugins/group/auto/NetCatPod 11.35
328 TestNetworkPlugins/group/auto/DNS 0.24
329 TestNetworkPlugins/group/auto/Localhost 0.21
330 TestNetworkPlugins/group/auto/HairPin 0.21
331 TestNetworkPlugins/group/kindnet/Start 81.96
332 TestNetworkPlugins/group/kindnet/ControllerPod 5.04
333 TestNetworkPlugins/group/kindnet/KubeletFlags 0.33
334 TestNetworkPlugins/group/kindnet/NetCatPod 10.43
335 TestNetworkPlugins/group/kindnet/DNS 0.22
336 TestNetworkPlugins/group/kindnet/Localhost 0.18
337 TestNetworkPlugins/group/kindnet/HairPin 0.21
338 TestNetworkPlugins/group/calico/Start 71.15
339 TestNetworkPlugins/group/calico/ControllerPod 5.04
340 TestNetworkPlugins/group/calico/KubeletFlags 0.35
341 TestNetworkPlugins/group/calico/NetCatPod 11.41
342 TestNetworkPlugins/group/calico/DNS 0.26
343 TestNetworkPlugins/group/calico/Localhost 0.21
344 TestNetworkPlugins/group/calico/HairPin 0.22
345 TestNetworkPlugins/group/custom-flannel/Start 68.04
346 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.36
347 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.42
348 TestNetworkPlugins/group/custom-flannel/DNS 0.25
349 TestNetworkPlugins/group/custom-flannel/Localhost 0.25
350 TestNetworkPlugins/group/custom-flannel/HairPin 0.23
351 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 5.03
352 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.13
353 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.53
354 TestStartStop/group/default-k8s-diff-port/serial/Pause 5.27
355 TestNetworkPlugins/group/enable-default-cni/Start 99.11
356 TestNetworkPlugins/group/flannel/Start 76.12
357 TestNetworkPlugins/group/flannel/ControllerPod 5.04
358 TestNetworkPlugins/group/flannel/KubeletFlags 0.34
359 TestNetworkPlugins/group/flannel/NetCatPod 10.39
360 TestNetworkPlugins/group/flannel/DNS 0.25
361 TestNetworkPlugins/group/flannel/Localhost 0.22
362 TestNetworkPlugins/group/flannel/HairPin 0.22
363 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.52
364 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.5
365 TestNetworkPlugins/group/enable-default-cni/DNS 0.34
366 TestNetworkPlugins/group/enable-default-cni/Localhost 0.25
367 TestNetworkPlugins/group/enable-default-cni/HairPin 0.34
368 TestNetworkPlugins/group/bridge/Start 50.83
369 TestNetworkPlugins/group/bridge/KubeletFlags 0.34
370 TestNetworkPlugins/group/bridge/NetCatPod 10.36
371 TestNetworkPlugins/group/bridge/DNS 0.24
372 TestNetworkPlugins/group/bridge/Localhost 0.19
373 TestNetworkPlugins/group/bridge/HairPin 0.19
TestDownloadOnly/v1.16.0/json-events (14.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-924841 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-924841 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (14.140142902s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (14.14s)

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-924841
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-924841: exit status 85 (91.554388ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-924841 | jenkins | v1.32.0 | 14 Nov 23 13:33 UTC |          |
	|         | -p download-only-924841        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/14 13:33:55
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1114 13:33:55.005852 1191695 out.go:296] Setting OutFile to fd 1 ...
	I1114 13:33:55.006047 1191695 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 13:33:55.006055 1191695 out.go:309] Setting ErrFile to fd 2...
	I1114 13:33:55.006061 1191695 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 13:33:55.006626 1191695 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17581-1186318/.minikube/bin
	W1114 13:33:55.007119 1191695 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17581-1186318/.minikube/config/config.json: open /home/jenkins/minikube-integration/17581-1186318/.minikube/config/config.json: no such file or directory
	I1114 13:33:55.007672 1191695 out.go:303] Setting JSON to true
	I1114 13:33:55.008799 1191695 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":36981,"bootTime":1699931854,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1114 13:33:55.008922 1191695 start.go:138] virtualization:  
	I1114 13:33:55.012537 1191695 out.go:97] [download-only-924841] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1114 13:33:55.014992 1191695 out.go:169] MINIKUBE_LOCATION=17581
	W1114 13:33:55.012824 1191695 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/preloaded-tarball: no such file or directory
	I1114 13:33:55.012874 1191695 notify.go:220] Checking for updates...
	I1114 13:33:55.019139 1191695 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1114 13:33:55.020884 1191695 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17581-1186318/kubeconfig
	I1114 13:33:55.022728 1191695 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17581-1186318/.minikube
	I1114 13:33:55.024582 1191695 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1114 13:33:55.028450 1191695 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1114 13:33:55.028766 1191695 driver.go:378] Setting default libvirt URI to qemu:///system
	I1114 13:33:55.052982 1191695 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1114 13:33:55.053078 1191695 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1114 13:33:55.133233 1191695 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2023-11-14 13:33:55.121254925 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1114 13:33:55.133441 1191695 docker.go:295] overlay module found
	I1114 13:33:55.135595 1191695 out.go:97] Using the docker driver based on user configuration
	I1114 13:33:55.135652 1191695 start.go:298] selected driver: docker
	I1114 13:33:55.135674 1191695 start.go:902] validating driver "docker" against <nil>
	I1114 13:33:55.135799 1191695 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1114 13:33:55.204677 1191695 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2023-11-14 13:33:55.194497563 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1114 13:33:55.204846 1191695 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1114 13:33:55.205151 1191695 start_flags.go:394] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1114 13:33:55.205305 1191695 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1114 13:33:55.207647 1191695 out.go:169] Using Docker driver with root privileges
	I1114 13:33:55.209713 1191695 cni.go:84] Creating CNI manager for ""
	I1114 13:33:55.209733 1191695 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1114 13:33:55.209746 1191695 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1114 13:33:55.209761 1191695 start_flags.go:323] config:
	{Name:download-only-924841 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-924841 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 13:33:55.212000 1191695 out.go:97] Starting control plane node download-only-924841 in cluster download-only-924841
	I1114 13:33:55.212020 1191695 cache.go:121] Beginning downloading kic base image for docker with crio
	I1114 13:33:55.213949 1191695 out.go:97] Pulling base image ...
	I1114 13:33:55.213980 1191695 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1114 13:33:55.214135 1191695 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 in local docker daemon
	I1114 13:33:55.232912 1191695 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 to local cache
	I1114 13:33:55.233886 1191695 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 in local cache directory
	I1114 13:33:55.233989 1191695 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 to local cache
	I1114 13:33:55.277885 1191695 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4
	I1114 13:33:55.277932 1191695 cache.go:56] Caching tarball of preloaded images
	I1114 13:33:55.278875 1191695 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1114 13:33:55.281951 1191695 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1114 13:33:55.281982 1191695 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4 ...
	I1114 13:33:55.396617 1191695 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:743cd3b7071469270e4dbdc0d89badaa -> /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-arm64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-924841"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.09s)
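
Note: the exit status 85 above is the expected path, not a defect. A --download-only run only populates the cache and never creates a node, which is why the output reports that the control plane node "" does not exist. A minimal Go sketch of the same kind of check the harness performs, run the logs command and assert on the non-zero exit (the binary path is the one used in this report; adjust it for a local build):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-linux-arm64", "logs", "-p", "download-only-924841")
        out, err := cmd.CombinedOutput()
        var exitErr *exec.ExitError
        if errors.As(err, &exitErr) {
            // No control plane node exists for a download-only profile,
            // so a non-zero exit (85 in this run) is the expected outcome.
            fmt.Printf("logs exited %d\n%s", exitErr.ExitCode(), out)
            return
        }
        fmt.Printf("unexpected success:\n%s", out)
    }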

TestDownloadOnly/v1.28.3/json-events (13.99s)

=== RUN   TestDownloadOnly/v1.28.3/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-924841 --force --alsologtostderr --kubernetes-version=v1.28.3 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-924841 --force --alsologtostderr --kubernetes-version=v1.28.3 --container-runtime=crio --driver=docker  --container-runtime=crio: (13.991381939s)
--- PASS: TestDownloadOnly/v1.28.3/json-events (13.99s)
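
With -o=json, start emits machine-readable progress events, one JSON object per line. The event schema is not reproduced in this report, so the sketch below is illustrative only and decodes each line generically; the "type" and "data" fields are assumptions about the event envelope, not a documented contract:

    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "os"
    )

    func main() {
        // Pipe minikube's stdout into this program, e.g.:
        //   out/minikube-linux-arm64 start -o=json --download-only ... | ./events
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // allow long lines
        for sc.Scan() {
            var ev map[string]any
            if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
                continue // skip any non-JSON noise
            }
            fmt.Println(ev["type"], ev["data"]) // assumed field names
        }
    }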

TestDownloadOnly/v1.28.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.3/preload-exists
--- PASS: TestDownloadOnly/v1.28.3/preload-exists (0.00s)
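
preload-exists only has to confirm that the tarball fetched by the previous step is already on disk. A rough, hedged equivalent in Go, using the cache path and the md5 carried in the checksum query parameter of the v1.28.3 download URL recorded later in this report (this is a sketch of the idea, not the harness's actual code):

    package main

    import (
        "crypto/md5"
        "encoding/hex"
        "fmt"
        "io"
        "log"
        "os"
    )

    func main() {
        const path = "/home/jenkins/minikube-integration/17581-1186318/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4"
        const want = "3fdaeefa2c0cc3e046170ba83ccf0cac" // from ?checksum=md5:... in the download URL

        f, err := os.Open(path)
        if err != nil {
            log.Fatal(err) // a missing tarball is exactly what preload-exists would catch
        }
        defer f.Close()

        h := md5.New()
        if _, err := io.Copy(h, f); err != nil {
            log.Fatal(err)
        }
        if got := hex.EncodeToString(h.Sum(nil)); got != want {
            log.Fatalf("checksum mismatch: got %s, want %s", got, want)
        }
        fmt.Println("preload tarball present and checksum matches")
    }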

TestDownloadOnly/v1.28.3/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.28.3/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-924841
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-924841: exit status 85 (89.093119ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-924841 | jenkins | v1.32.0 | 14 Nov 23 13:33 UTC |          |
	|         | -p download-only-924841        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-924841 | jenkins | v1.32.0 | 14 Nov 23 13:34 UTC |          |
	|         | -p download-only-924841        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.3   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/14 13:34:09
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.21.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1114 13:34:09.236525 1191775 out.go:296] Setting OutFile to fd 1 ...
	I1114 13:34:09.236743 1191775 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 13:34:09.236751 1191775 out.go:309] Setting ErrFile to fd 2...
	I1114 13:34:09.236758 1191775 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 13:34:09.237035 1191775 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17581-1186318/.minikube/bin
	W1114 13:34:09.237199 1191775 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17581-1186318/.minikube/config/config.json: open /home/jenkins/minikube-integration/17581-1186318/.minikube/config/config.json: no such file or directory
	I1114 13:34:09.237442 1191775 out.go:303] Setting JSON to true
	I1114 13:34:09.238247 1191775 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":36996,"bootTime":1699931854,"procs":145,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1114 13:34:09.238325 1191775 start.go:138] virtualization:  
	I1114 13:34:09.240612 1191775 out.go:97] [download-only-924841] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1114 13:34:09.242904 1191775 out.go:169] MINIKUBE_LOCATION=17581
	I1114 13:34:09.240951 1191775 notify.go:220] Checking for updates...
	I1114 13:34:09.247324 1191775 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1114 13:34:09.249652 1191775 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17581-1186318/kubeconfig
	I1114 13:34:09.251766 1191775 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17581-1186318/.minikube
	I1114 13:34:09.253690 1191775 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1114 13:34:09.257601 1191775 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1114 13:34:09.258145 1191775 config.go:182] Loaded profile config "download-only-924841": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W1114 13:34:09.258212 1191775 start.go:810] api.Load failed for download-only-924841: filestore "download-only-924841": Docker machine "download-only-924841" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1114 13:34:09.258329 1191775 driver.go:378] Setting default libvirt URI to qemu:///system
	W1114 13:34:09.258360 1191775 start.go:810] api.Load failed for download-only-924841: filestore "download-only-924841": Docker machine "download-only-924841" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1114 13:34:09.282192 1191775 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1114 13:34:09.282282 1191775 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1114 13:34:09.365032 1191775 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-11-14 13:34:09.353677059 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1114 13:34:09.365156 1191775 docker.go:295] overlay module found
	I1114 13:34:09.367210 1191775 out.go:97] Using the docker driver based on existing profile
	I1114 13:34:09.367234 1191775 start.go:298] selected driver: docker
	I1114 13:34:09.367241 1191775 start.go:902] validating driver "docker" against &{Name:download-only-924841 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-924841 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 13:34:09.367414 1191775 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1114 13:34:09.435020 1191775 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2023-11-14 13:34:09.4246282 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1114 13:34:09.435507 1191775 cni.go:84] Creating CNI manager for ""
	I1114 13:34:09.435524 1191775 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1114 13:34:09.435533 1191775 start_flags.go:323] config:
	{Name:download-only-924841 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:download-only-924841 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 13:34:09.437830 1191775 out.go:97] Starting control plane node download-only-924841 in cluster download-only-924841
	I1114 13:34:09.437858 1191775 cache.go:121] Beginning downloading kic base image for docker with crio
	I1114 13:34:09.439985 1191775 out.go:97] Pulling base image ...
	I1114 13:34:09.440035 1191775 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1114 13:34:09.440207 1191775 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 in local docker daemon
	I1114 13:34:09.456762 1191775 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 to local cache
	I1114 13:34:09.456901 1191775 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 in local cache directory
	I1114 13:34:09.456924 1191775 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 in local cache directory, skipping pull
	I1114 13:34:09.456932 1191775 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 exists in cache, skipping pull
	I1114 13:34:09.456940 1191775 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 as a tarball
	I1114 13:34:09.501107 1191775 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.3/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4
	I1114 13:34:09.501133 1191775 cache.go:56] Caching tarball of preloaded images
	I1114 13:34:09.501292 1191775 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1114 13:34:09.503572 1191775 out.go:97] Downloading Kubernetes v1.28.3 preload ...
	I1114 13:34:09.503600 1191775 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4 ...
	I1114 13:34:09.618151 1191775 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.3/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4?checksum=md5:3fdaeefa2c0cc3e046170ba83ccf0cac -> /home/jenkins/minikube-integration/17581-1186318/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-arm64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-924841"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.3/LogsDuration (0.09s)

TestDownloadOnly/DeleteAll (0.25s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.25s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.16s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-924841
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.16s)

TestBinaryMirror (0.65s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-094237 --alsologtostderr --binary-mirror http://127.0.0.1:46157 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-094237" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-094237
--- PASS: TestBinaryMirror (0.65s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.1s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:927: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-008546
addons_test.go:927: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-008546: exit status 85 (95.124358ms)

-- stdout --
	* Profile "addons-008546" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-008546"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.10s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.1s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:938: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-008546
addons_test.go:938: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-008546: exit status 85 (102.131117ms)

-- stdout --
	* Profile "addons-008546" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-008546"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.10s)

TestAddons/Setup (169.14s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-arm64 start -p addons-008546 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Done: out/minikube-linux-arm64 start -p addons-008546 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns: (2m49.139325623s)
--- PASS: TestAddons/Setup (169.14s)

TestAddons/parallel/Registry (14.93s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:329: registry stabilized in 72.905395ms
addons_test.go:331: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6zxk7" [aab81737-f3a1-4831-aa4b-580e8350b7bc] Running
addons_test.go:331: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.032128361s
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-szh9q" [5df13a97-1d8b-408c-8786-cb99aa641c8d] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.013492386s
addons_test.go:339: (dbg) Run:  kubectl --context addons-008546 delete po -l run=registry-test --now
addons_test.go:344: (dbg) Run:  kubectl --context addons-008546 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:344: (dbg) Done: kubectl --context addons-008546 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.680433058s)
addons_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p addons-008546 ip
2023/11/14 13:37:28 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:387: (dbg) Run:  out/minikube-linux-arm64 -p addons-008546 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.93s)
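
The heart of this test is the in-cluster probe `wget --spider -S http://registry.kube-system.svc.cluster.local` run from a busybox pod. A Go equivalent of that probe is sketched below as an illustration; it only works where the cluster-local service name resolves (inside a pod, or through equivalent DNS plumbing):

    package main

    import (
        "fmt"
        "log"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{Timeout: 10 * time.Second}
        // HEAD mirrors wget --spider: fetch headers only, no body.
        resp, err := client.Head("http://registry.kube-system.svc.cluster.local")
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        fmt.Println("registry responded:", resp.Status)
    }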

TestAddons/parallel/InspektorGadget (10.89s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-z5bv4" [846efcb5-0f1b-4ae9-8370-cb387e9d12c7] Running
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.02135663s
addons_test.go:840: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-008546
addons_test.go:840: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-008546: (5.86490597s)
--- PASS: TestAddons/parallel/InspektorGadget (10.89s)

TestAddons/parallel/MetricsServer (5.98s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:406: metrics-server stabilized in 3.973368ms
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-rdnlc" [9c93fef0-fca3-46cd-adf4-ed2436c58e74] Running
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.014182469s
addons_test.go:414: (dbg) Run:  kubectl --context addons-008546 top pods -n kube-system
addons_test.go:431: (dbg) Run:  out/minikube-linux-arm64 -p addons-008546 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.98s)

TestAddons/parallel/CSI (35.63s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:560: csi-hostpath-driver pods stabilized in 5.525031ms
addons_test.go:563: (dbg) Run:  kubectl --context addons-008546 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-008546 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-008546 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-008546 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:573: (dbg) Run:  kubectl --context addons-008546 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [f2933999-dc0c-4ca6-8db8-8fa1d843e1bc] Pending
helpers_test.go:344: "task-pv-pod" [f2933999-dc0c-4ca6-8db8-8fa1d843e1bc] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [f2933999-dc0c-4ca6-8db8-8fa1d843e1bc] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.027374112s
addons_test.go:583: (dbg) Run:  kubectl --context addons-008546 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:588: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-008546 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-008546 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-008546 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:593: (dbg) Run:  kubectl --context addons-008546 delete pod task-pv-pod
addons_test.go:593: (dbg) Done: kubectl --context addons-008546 delete pod task-pv-pod: (1.118697324s)
addons_test.go:599: (dbg) Run:  kubectl --context addons-008546 delete pvc hpvc
addons_test.go:605: (dbg) Run:  kubectl --context addons-008546 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:610: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-008546 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-008546 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-008546 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:615: (dbg) Run:  kubectl --context addons-008546 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:620: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [b947e2fc-732c-4ca5-8b16-0b456b3a930b] Pending
helpers_test.go:344: "task-pv-pod-restore" [b947e2fc-732c-4ca5-8b16-0b456b3a930b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [b947e2fc-732c-4ca5-8b16-0b456b3a930b] Running
addons_test.go:620: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.018307722s
addons_test.go:625: (dbg) Run:  kubectl --context addons-008546 delete pod task-pv-pod-restore
addons_test.go:625: (dbg) Done: kubectl --context addons-008546 delete pod task-pv-pod-restore: (1.026679188s)
addons_test.go:629: (dbg) Run:  kubectl --context addons-008546 delete pvc hpvc-restore
addons_test.go:633: (dbg) Run:  kubectl --context addons-008546 delete volumesnapshot new-snapshot-demo
addons_test.go:637: (dbg) Run:  out/minikube-linux-arm64 -p addons-008546 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:637: (dbg) Done: out/minikube-linux-arm64 -p addons-008546 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.833467938s)
addons_test.go:641: (dbg) Run:  out/minikube-linux-arm64 -p addons-008546 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (35.63s)

TestAddons/parallel/Headlamp (12.42s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:823: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-008546 --alsologtostderr -v=1
addons_test.go:823: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-008546 --alsologtostderr -v=1: (1.379866863s)
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-777fd4b855-zqndv" [32dda207-5847-4622-b476-e9a50349697a] Pending
helpers_test.go:344: "headlamp-777fd4b855-zqndv" [32dda207-5847-4622-b476-e9a50349697a] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-777fd4b855-zqndv" [32dda207-5847-4622-b476-e9a50349697a] Running
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.036201079s
--- PASS: TestAddons/parallel/Headlamp (12.42s)

TestAddons/parallel/CloudSpanner (5.65s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5649c69bf6-tgzbd" [213f5292-d2b4-43d6-be18-483dc0d3c9d4] Running
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.017334988s
addons_test.go:859: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-008546
--- PASS: TestAddons/parallel/CloudSpanner (5.65s)

TestAddons/parallel/LocalPath (53.76s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:872: (dbg) Run:  kubectl --context addons-008546 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:878: (dbg) Run:  kubectl --context addons-008546 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:882: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-008546 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-008546 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-008546 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-008546 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-008546 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [7c56a27a-1956-4a9c-9e5b-6615e82dac68] Pending
helpers_test.go:344: "test-local-path" [7c56a27a-1956-4a9c-9e5b-6615e82dac68] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [7c56a27a-1956-4a9c-9e5b-6615e82dac68] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [7c56a27a-1956-4a9c-9e5b-6615e82dac68] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.009732187s
addons_test.go:890: (dbg) Run:  kubectl --context addons-008546 get pvc test-pvc -o=json
addons_test.go:899: (dbg) Run:  out/minikube-linux-arm64 -p addons-008546 ssh "cat /opt/local-path-provisioner/pvc-07268d80-a275-4e12-8808-af7957f493bf_default_test-pvc/file1"
addons_test.go:911: (dbg) Run:  kubectl --context addons-008546 delete pod test-local-path
addons_test.go:915: (dbg) Run:  kubectl --context addons-008546 delete pvc test-pvc
addons_test.go:919: (dbg) Run:  out/minikube-linux-arm64 -p addons-008546 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:919: (dbg) Done: out/minikube-linux-arm64 -p addons-008546 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.673510566s)
--- PASS: TestAddons/parallel/LocalPath (53.76s)

TestAddons/parallel/NvidiaDevicePlugin (5.74s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-z7lg9" [39138d17-6ce8-4243-924a-592f11b60525] Running
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.101117708s
addons_test.go:954: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-008546
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.74s)

TestAddons/serial/GCPAuth/Namespaces (0.17s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:649: (dbg) Run:  kubectl --context addons-008546 create ns new-namespace
addons_test.go:663: (dbg) Run:  kubectl --context addons-008546 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.17s)

TestAddons/StoppedEnableDisable (12.43s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:171: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-008546
addons_test.go:171: (dbg) Done: out/minikube-linux-arm64 stop -p addons-008546: (12.104139675s)
addons_test.go:175: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-008546
addons_test.go:179: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-008546
addons_test.go:184: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-008546
--- PASS: TestAddons/StoppedEnableDisable (12.43s)

TestCertOptions (39.75s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-157546 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-157546 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (36.845133063s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-157546 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-157546 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-157546 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-157546" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-157546
E1114 14:25:20.964184 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/functional-943397/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-157546: (2.149729503s)
--- PASS: TestCertOptions (39.75s)
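
The openssl step above inspects the SANs baked into the generated apiserver certificate. A small Go sketch of the same check, assuming the certificate has been copied off the node first (the local file name is a placeholder, not something the test produces):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
    )

    func main() {
        // e.g. fetched via: minikube -p cert-options-157546 ssh "sudo cat /var/lib/minikube/certs/apiserver.crt"
        data, err := os.ReadFile("apiserver.crt")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            log.Fatal("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("DNS SANs:", cert.DNSNames)     // expect localhost and www.google.com among them
        fmt.Println("IP SANs:", cert.IPAddresses)   // expect 127.0.0.1 and 192.168.15.15 among them
    }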

TestCertExpiration (255.96s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-774873 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-774873 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (43.805040668s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-774873 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-774873 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (29.666079772s)
helpers_test.go:175: Cleaning up "cert-expiration-774873" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-774873
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-774873: (2.484536418s)
--- PASS: TestCertExpiration (255.96s)
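
The two starts above differ only in --cert-expiration: first 3m, then 8760h (one year). A hedged sketch for inspecting the validity window that results on any of the profile's PEM certificates; the file name is a placeholder for a cert copied out of the profile directory:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("client.crt") // placeholder path
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            log.Fatal("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        // After the second start this should report roughly one year remaining.
        fmt.Printf("expires %s (in %s)\n", cert.NotAfter, time.Until(cert.NotAfter).Round(time.Minute))
    }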

TestForceSystemdFlag (41.65s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-832181 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1114 14:23:24.011103 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/functional-943397/client.crt: no such file or directory
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-832181 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (38.475772123s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-832181 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-832181" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-832181
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-832181: (2.731312723s)
--- PASS: TestForceSystemdFlag (41.65s)
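
The `cat /etc/crio/crio.conf.d/02-crio.conf` step checks that --force-systemd actually reached CRI-O's rendered config. A sketch of that check in Go; the expected `cgroup_manager = "systemd"` key/value is an assumption about the drop-in's TOML content, so treat it as illustrative:

    package main

    import (
        "bufio"
        "fmt"
        "log"
        "os"
        "strings"
    )

    func main() {
        f, err := os.Open("02-crio.conf") // copied off the node for inspection
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            if strings.Contains(sc.Text(), "cgroup_manager") {
                fmt.Println(sc.Text()) // expect something like: cgroup_manager = "systemd"
                return
            }
        }
        fmt.Println("no cgroup_manager setting found")
    }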

TestForceSystemdEnv (42.58s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-098799 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-098799 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (39.745074292s)
helpers_test.go:175: Cleaning up "force-systemd-env-098799" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-098799
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-098799: (2.834754902s)
--- PASS: TestForceSystemdEnv (42.58s)

TestErrorSpam/setup (31.55s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-621379 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-621379 --driver=docker  --container-runtime=crio
E1114 13:42:14.368742 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/addons-008546/client.crt: no such file or directory
E1114 13:42:14.375182 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/addons-008546/client.crt: no such file or directory
E1114 13:42:14.385430 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/addons-008546/client.crt: no such file or directory
E1114 13:42:14.405655 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/addons-008546/client.crt: no such file or directory
E1114 13:42:14.445883 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/addons-008546/client.crt: no such file or directory
E1114 13:42:14.526142 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/addons-008546/client.crt: no such file or directory
E1114 13:42:14.686489 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/addons-008546/client.crt: no such file or directory
E1114 13:42:15.007542 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/addons-008546/client.crt: no such file or directory
E1114 13:42:15.648382 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/addons-008546/client.crt: no such file or directory
E1114 13:42:16.928593 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/addons-008546/client.crt: no such file or directory
E1114 13:42:19.488796 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/addons-008546/client.crt: no such file or directory
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-621379 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-621379 --driver=docker  --container-runtime=crio: (31.554547s)
--- PASS: TestErrorSpam/setup (31.55s)

TestErrorSpam/start (0.98s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-621379 --log_dir /tmp/nospam-621379 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-621379 --log_dir /tmp/nospam-621379 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-621379 --log_dir /tmp/nospam-621379 start --dry-run
--- PASS: TestErrorSpam/start (0.98s)

TestErrorSpam/status (1.22s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-621379 --log_dir /tmp/nospam-621379 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-621379 --log_dir /tmp/nospam-621379 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-621379 --log_dir /tmp/nospam-621379 status
--- PASS: TestErrorSpam/status (1.22s)

TestErrorSpam/pause (1.93s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-621379 --log_dir /tmp/nospam-621379 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-621379 --log_dir /tmp/nospam-621379 pause
E1114 13:42:24.609618 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/addons-008546/client.crt: no such file or directory
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-621379 --log_dir /tmp/nospam-621379 pause
--- PASS: TestErrorSpam/pause (1.93s)

TestErrorSpam/unpause (2.02s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-621379 --log_dir /tmp/nospam-621379 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-621379 --log_dir /tmp/nospam-621379 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-621379 --log_dir /tmp/nospam-621379 unpause
--- PASS: TestErrorSpam/unpause (2.02s)

TestErrorSpam/stop (1.5s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-621379 --log_dir /tmp/nospam-621379 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-621379 --log_dir /tmp/nospam-621379 stop: (1.271403884s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-621379 --log_dir /tmp/nospam-621379 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-621379 --log_dir /tmp/nospam-621379 stop
--- PASS: TestErrorSpam/stop (1.50s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17581-1186318/.minikube/files/etc/test/nested/copy/1191690/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (76.41s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-943397 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1114 13:42:34.849896 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/addons-008546/client.crt: no such file or directory
E1114 13:42:55.330090 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/addons-008546/client.crt: no such file or directory
E1114 13:43:36.290721 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/addons-008546/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-943397 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m16.411810091s)
--- PASS: TestFunctional/serial/StartWithProxy (76.41s)
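
For reference, the start invocation above can be reproduced against any profile; the proxy environment that gives this test its name is set up by the harness and does not appear in this log (the profile name below is illustrative):

	minikube start -p functional-demo \
	  --memory=4000 --apiserver-port=8441 --wait=all \
	  --driver=docker --container-runtime=crio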

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (39.99s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-943397 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-943397 --alsologtostderr -v=8: (39.991495173s)
functional_test.go:659: soft start took 39.992201007s for "functional-943397" cluster.
--- PASS: TestFunctional/serial/SoftStart (39.99s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.11s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-943397 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.76s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-943397 cache add registry.k8s.io/pause:3.1: (1.208499739s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-943397 cache add registry.k8s.io/pause:3.3: (1.31186457s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-943397 cache add registry.k8s.io/pause:latest: (1.23832345s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.76s)

TestFunctional/serial/CacheCmd/cache/add_local (1.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-943397 /tmp/TestFunctionalserialCacheCmdcacheadd_local2594666962/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 cache add minikube-local-cache-test:functional-943397
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 cache delete minikube-local-cache-test:functional-943397
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-943397
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.17s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

TestFunctional/serial/CacheCmd/cache/list (0.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.10s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.37s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.37s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.21s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-943397 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (358.58814ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-arm64 -p functional-943397 cache reload: (1.15446428s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.21s)
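
Taken together, the CacheCmd subtests above walk the whole image-cache round trip. The same flow by hand, using the image and profile from this log:

	minikube -p functional-943397 cache add registry.k8s.io/pause:latest    # pull into the host cache and load into the node
	minikube cache list                                                     # the cache is shared across profiles
	minikube -p functional-943397 ssh sudo crictl rmi registry.k8s.io/pause:latest       # remove it inside the node only
	minikube -p functional-943397 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # fails: image gone from the node
	minikube -p functional-943397 cache reload                              # push cached images back into the node
	minikube -p functional-943397 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # succeeds again
	minikube cache delete registry.k8s.io/pause:latest                      # drop it from the host cache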

TestFunctional/serial/CacheCmd/cache/delete (0.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.16s)

TestFunctional/serial/MinikubeKubectlCmd (0.16s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 kubectl -- --context functional-943397 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.16s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.17s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-943397 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.17s)

TestFunctional/serial/ExtraConfig (32.28s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-943397 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1114 13:44:58.211866 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/addons-008546/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-943397 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (32.282194021s)
functional_test.go:757: restart took 32.282345297s for "functional-943397" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (32.28s)
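
--extra-config passes component.key=value pairs straight through to the named component, and running start again on an existing profile applies them as a restart, which is why the step reports "restart took ...". The invocation, reusable as-is:

	minikube start -p functional-943397 \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
	  --wait=all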

TestFunctional/serial/ComponentHealth (0.11s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-943397 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)
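
The health check is plain kubectl: list the control-plane pods by label and read their phase and Ready condition. A compact sketch using jsonpath instead of the test's JSON parsing:

	kubectl --context functional-943397 -n kube-system get po -l tier=control-plane \
	  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'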

TestFunctional/serial/LogsCmd (1.88s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-943397 logs: (1.878835012s)
--- PASS: TestFunctional/serial/LogsCmd (1.88s)

TestFunctional/serial/LogsFileCmd (1.89s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 logs --file /tmp/TestFunctionalserialLogsFileCmd1431813757/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-943397 logs --file /tmp/TestFunctionalserialLogsFileCmd1431813757/001/logs.txt: (1.884792787s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.89s)
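
Both log subtests exercise the same collector; --file only redirects where the output goes:

	minikube -p functional-943397 logs                       # print to stdout
	minikube -p functional-943397 logs --file /tmp/logs.txt  # write to a file instead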

TestFunctional/serial/InvalidService (4.9s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-943397 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-943397
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-943397: exit status 115 (600.97818ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31093 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-943397 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.90s)
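
Exit status 115 corresponds to the SVC_UNREACHABLE error on stderr: the Service exists and gets a NodePort (hence the URL table on stdout), but no running pod backs it. The testdata/invalidsvc.yaml manifest is not reproduced in this log; a hypothetical stand-in that triggers the same failure is a Service whose selector matches no pods:

	kubectl --context functional-943397 apply -f - <<-'EOF'
	apiVersion: v1
	kind: Service
	metadata:
	  name: invalid-svc
	spec:
	  type: NodePort
	  selector:
	    app: does-not-exist   # deliberately matches nothing
	  ports:
	  - port: 80
	EOF
	minikube -p functional-943397 service invalid-svc   # exits 115 with SVC_UNREACHABLE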

TestFunctional/parallel/ConfigCmd (0.69s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-943397 config get cpus: exit status 14 (144.317627ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-943397 config get cpus: exit status 14 (109.757869ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.69s)
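
minikube config is a simple key/value store; get on an unset key exits 14, which is exactly what both Non-zero exits above assert. The round trip:

	minikube -p functional-943397 config get cpus    # exit 14: key not set
	minikube -p functional-943397 config set cpus 2
	minikube -p functional-943397 config get cpus    # prints 2
	minikube -p functional-943397 config unset cpus
	minikube -p functional-943397 config get cpus    # exit 14 again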

TestFunctional/parallel/DashboardCmd (14.15s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-943397 --alsologtostderr -v=1]
2023/11/14 13:47:39 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-943397 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1217806: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (14.15s)

TestFunctional/parallel/DryRun (0.52s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-943397 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-943397 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (225.338496ms)

-- stdout --
	* [functional-943397] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17581
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17581-1186318/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17581-1186318/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1114 13:47:25.483621 1217588 out.go:296] Setting OutFile to fd 1 ...
	I1114 13:47:25.483821 1217588 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 13:47:25.483849 1217588 out.go:309] Setting ErrFile to fd 2...
	I1114 13:47:25.483871 1217588 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 13:47:25.484192 1217588 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17581-1186318/.minikube/bin
	I1114 13:47:25.484606 1217588 out.go:303] Setting JSON to false
	I1114 13:47:25.485584 1217588 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":37792,"bootTime":1699931854,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1114 13:47:25.485715 1217588 start.go:138] virtualization:  
	I1114 13:47:25.489083 1217588 out.go:177] * [functional-943397] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1114 13:47:25.490811 1217588 out.go:177]   - MINIKUBE_LOCATION=17581
	I1114 13:47:25.493077 1217588 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1114 13:47:25.490924 1217588 notify.go:220] Checking for updates...
	I1114 13:47:25.496810 1217588 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17581-1186318/kubeconfig
	I1114 13:47:25.498459 1217588 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17581-1186318/.minikube
	I1114 13:47:25.500453 1217588 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1114 13:47:25.503567 1217588 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1114 13:47:25.506151 1217588 config.go:182] Loaded profile config "functional-943397": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 13:47:25.506748 1217588 driver.go:378] Setting default libvirt URI to qemu:///system
	I1114 13:47:25.533062 1217588 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1114 13:47:25.533159 1217588 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1114 13:47:25.623455 1217588 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:45 SystemTime:2023-11-14 13:47:25.613734032 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1114 13:47:25.623579 1217588 docker.go:295] overlay module found
	I1114 13:47:25.625528 1217588 out.go:177] * Using the docker driver based on existing profile
	I1114 13:47:25.627161 1217588 start.go:298] selected driver: docker
	I1114 13:47:25.627194 1217588 start.go:902] validating driver "docker" against &{Name:functional-943397 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:functional-943397 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 13:47:25.627290 1217588 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1114 13:47:25.629450 1217588 out.go:177] 
	W1114 13:47:25.631325 1217588 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1114 13:47:25.633363 1217588 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-943397 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.52s)
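
--dry-run runs the full validation pass (driver selection, profile load, resource checks) without creating or changing anything; here the 250MB request trips the 1800MB floor and the command exits 23 with RSRC_INSUFFICIENT_REQ_MEMORY. To reproduce just the validation outcome:

	minikube start -p functional-943397 --dry-run --memory 250MB \
	  --driver=docker --container-runtime=crio
	echo $?   # 23 when validation rejects the request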

TestFunctional/parallel/InternationalLanguage (0.23s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-943397 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-943397 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (231.937543ms)

-- stdout --
	* [functional-943397] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17581
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17581-1186318/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17581-1186318/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1114 13:47:25.262125 1217548 out.go:296] Setting OutFile to fd 1 ...
	I1114 13:47:25.262359 1217548 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 13:47:25.262387 1217548 out.go:309] Setting ErrFile to fd 2...
	I1114 13:47:25.262406 1217548 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 13:47:25.262800 1217548 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17581-1186318/.minikube/bin
	I1114 13:47:25.263244 1217548 out.go:303] Setting JSON to false
	I1114 13:47:25.264217 1217548 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":37792,"bootTime":1699931854,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1114 13:47:25.264315 1217548 start.go:138] virtualization:  
	I1114 13:47:25.267711 1217548 out.go:177] * [functional-943397] minikube v1.32.0 sur Ubuntu 20.04 (arm64)
	I1114 13:47:25.269967 1217548 out.go:177]   - MINIKUBE_LOCATION=17581
	I1114 13:47:25.272068 1217548 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1114 13:47:25.270101 1217548 notify.go:220] Checking for updates...
	I1114 13:47:25.274082 1217548 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17581-1186318/kubeconfig
	I1114 13:47:25.275835 1217548 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17581-1186318/.minikube
	I1114 13:47:25.277494 1217548 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1114 13:47:25.279148 1217548 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1114 13:47:25.281171 1217548 config.go:182] Loaded profile config "functional-943397": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 13:47:25.281846 1217548 driver.go:378] Setting default libvirt URI to qemu:///system
	I1114 13:47:25.306945 1217548 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1114 13:47:25.307044 1217548 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1114 13:47:25.398276 1217548 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:45 SystemTime:2023-11-14 13:47:25.388304452 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1114 13:47:25.398397 1217548 docker.go:295] overlay module found
	I1114 13:47:25.400394 1217548 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1114 13:47:25.402405 1217548 start.go:298] selected driver: docker
	I1114 13:47:25.402436 1217548 start.go:902] validating driver "docker" against &{Name:functional-943397 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1699485386-17565@sha256:bc7ff092e883443bfc1c9fb6a45d08012db3c0fc68e914887b7f16ccdefcab24 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:functional-943397 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1114 13:47:25.402533 1217548 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1114 13:47:25.404865 1217548 out.go:177] 
	W1114 13:47:25.406523 1217548 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1114 13:47:25.408078 1217548 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.23s)
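
Identical dry run, but minikube selects its message catalog from the locale environment, so the same RSRC_INSUFFICIENT_REQ_MEMORY advice comes back in French. How the harness sets the locale is not visible in this log; a sketch assuming the standard locale variables minikube reads:

	LC_ALL=fr_FR.UTF-8 minikube start -p functional-943397 --dry-run --memory 250MB \
	  --driver=docker --container-runtime=crio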

TestFunctional/parallel/StatusCmd (1.17s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.17s)

TestFunctional/parallel/ServiceCmdConnect (35.7s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1626: (dbg) Run:  kubectl --context functional-943397 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-943397 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-q7v82" [b0c525d4-d4c3-4e7a-a052-51ac8a05f4ce] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-q7v82" [b0c525d4-d4c3-4e7a-a052-51ac8a05f4ce] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 35.015321727s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:31597
functional_test.go:1674: http://192.168.49.2:31597: success! body:

Hostname: hello-node-connect-7799dfb7c6-q7v82

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31597
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (35.70s)
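
The connect test is the standard deploy/expose/discover loop, with minikube service --url resolving the NodePort into a reachable address. By hand, against the same image:

	kubectl --context functional-943397 create deployment hello-node-connect \
	  --image=registry.k8s.io/echoserver-arm:1.8
	kubectl --context functional-943397 expose deployment hello-node-connect \
	  --type=NodePort --port=8080
	URL=$(minikube -p functional-943397 service hello-node-connect --url)
	curl -s "$URL"   # echoserver reflects the request, as in the body above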

TestFunctional/parallel/AddonsCmd (0.18s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.18s)

TestFunctional/parallel/SSHCmd (0.89s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.89s)

TestFunctional/parallel/CpCmd (1.7s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 ssh -n functional-943397 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 cp functional-943397:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2413867350/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 ssh -n functional-943397 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.70s)
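
minikube cp copies in both directions, with the node side addressed as <node>:<path>; ssh -n picks the node to verify on. The test's round trip:

	minikube -p functional-943397 cp testdata/cp-test.txt /home/docker/cp-test.txt                # host -> node
	minikube -p functional-943397 cp functional-943397:/home/docker/cp-test.txt /tmp/cp-test.txt  # node -> host
	minikube -p functional-943397 ssh -n functional-943397 "sudo cat /home/docker/cp-test.txt"    # confirm contents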

TestFunctional/parallel/FileSync (0.35s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1191690/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 ssh "sudo cat /etc/test/nested/copy/1191690/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.35s)

TestFunctional/parallel/CertSync (1.95s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1191690.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 ssh "sudo cat /etc/ssl/certs/1191690.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1191690.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 ssh "sudo cat /usr/share/ca-certificates/1191690.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/11916902.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 ssh "sudo cat /etc/ssl/certs/11916902.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/11916902.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 ssh "sudo cat /usr/share/ca-certificates/11916902.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.95s)
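
The hashed filenames (51391683.0, 3ec20f2e.0) are OpenSSL subject-hash names, which is how /etc/ssl/certs is indexed; the pairing of each hash to a synced PEM is inferred from the checks above, not stated in the log. Given one of the PEMs, the expected hash name can be recomputed:

	minikube -p functional-943397 ssh \
	  "sudo openssl x509 -in /etc/ssl/certs/1191690.pem -noout -subject_hash"   # e.g. 51391683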

TestFunctional/parallel/NodeLabels (0.09s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-943397 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.68s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-943397 ssh "sudo systemctl is-active docker": exit status 1 (346.614215ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-943397 ssh "sudo systemctl is-active containerd": exit status 1 (331.158282ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.68s)
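
systemctl is-active exits 0 only for an active unit; exit status 3 with "inactive" on stdout is the expected answer here, since this cluster runs crio and the other runtimes must stay off. Spot-check all three:

	minikube -p functional-943397 ssh "sudo systemctl is-active docker"      # inactive, exit 3
	minikube -p functional-943397 ssh "sudo systemctl is-active containerd"  # inactive, exit 3
	minikube -p functional-943397 ssh "sudo systemctl is-active crio"        # active, exit 0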

TestFunctional/parallel/License (0.27s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.27s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.73s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-943397 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-943397 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-943397 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-943397 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1215211: os: process already finished
helpers_test.go:508: unable to kill pid 1215099: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.73s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-943397 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.4s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-943397 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [e569ab2d-1ff3-47e9-9603-d19457ae3ab6] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [e569ab2d-1ff3-47e9-9603-d19457ae3ab6] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.009953786s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.40s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-943397 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.103.252.61 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-943397 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
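
The TunnelCmd subtests cover the whole LoadBalancer flow: minikube tunnel runs as a long-lived daemon, pending LoadBalancer services receive an ingress IP, and traffic to that IP reaches the service directly. By hand, in two terminals:

	# terminal 1: keep the tunnel running (it needs privileges to add routes)
	minikube -p functional-943397 tunnel

	# terminal 2: a LoadBalancer service now gets an ingress IP
	kubectl --context functional-943397 apply -f testdata/testsvc.yaml
	kubectl --context functional-943397 get svc nginx-svc \
	  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
	curl -s http://10.103.252.61/   # IP from this log; yours will differ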

TestFunctional/parallel/ServiceCmd/DeployApp (7.26s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1436: (dbg) Run:  kubectl --context functional-943397 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-943397 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-c4qs9" [8d0385f1-c4ac-42d8-b041-9b709558f6ef] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-c4qs9" [8d0385f1-c4ac-42d8-b041-9b709558f6ef] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.013461585s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.26s)

TestFunctional/parallel/ServiceCmd/List (0.58s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.58s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.59s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 service list -o json
functional_test.go:1493: Took "588.560947ms" to run "out/minikube-linux-arm64 -p functional-943397 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.59s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.45s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:32737
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.45s)

TestFunctional/parallel/ServiceCmd/Format (0.48s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.48s)

TestFunctional/parallel/ServiceCmd/URL (0.43s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:32737
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.43s)
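
Condensed, the ServiceCmd subtests above exercise this flow (a sketch; the image, port, and NodePort URLs are taken from this run's log):

# Deploy and expose a test service.
kubectl --context functional-943397 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
kubectl --context functional-943397 expose deployment hello-node --type=NodePort --port=8080

# Query it through minikube's service command.
minikube -p functional-943397 service list
minikube -p functional-943397 service list -o json
minikube -p functional-943397 service --namespace=default --https --url hello-node   # https://192.168.49.2:32737 here
minikube -p functional-943397 service hello-node --url                               # http://192.168.49.2:32737 here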

TestFunctional/parallel/ProfileCmd/profile_not_create (0.47s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.47s)

TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1314: Took "357.752974ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1328: Took "71.773679ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1365: Took "361.372676ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1378: Took "80.694517ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)
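
Worth noting: in both profile listings above the --light variant returns about 4-5x faster (~358 ms vs ~72 ms, ~361 ms vs ~81 ms). As we understand it, --light skips validating each cluster's status, so the comparison by hand is simply:

minikube profile list -o json          # probes cluster status; ~361 ms in this run
minikube profile list -o json --light  # skips the status check; ~81 ms in this run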

TestFunctional/parallel/MountCmd/any-port (62.45s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-943397 /tmp/TestFunctionalparallelMountCmdany-port1766483855/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1699969578183542743" to /tmp/TestFunctionalparallelMountCmdany-port1766483855/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1699969578183542743" to /tmp/TestFunctionalparallelMountCmdany-port1766483855/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1699969578183542743" to /tmp/TestFunctionalparallelMountCmdany-port1766483855/001/test-1699969578183542743
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-943397 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (401.929356ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 14 13:46 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 14 13:46 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 14 13:46 test-1699969578183542743
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 ssh cat /mount-9p/test-1699969578183542743
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-943397 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [4a1a75ea-ca8b-4c4c-9b7b-015b3617afde] Pending
helpers_test.go:344: "busybox-mount" [4a1a75ea-ca8b-4c4c-9b7b-015b3617afde] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
E1114 13:47:14.368485 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/addons-008546/client.crt: no such file or directory
helpers_test.go:344: "busybox-mount" [4a1a75ea-ca8b-4c4c-9b7b-015b3617afde] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [4a1a75ea-ca8b-4c4c-9b7b-015b3617afde] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 59.017174633s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-943397 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-943397 /tmp/TestFunctionalparallelMountCmdany-port1766483855/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (62.45s)
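
For anyone reproducing the 9p mount check by hand, the sequence this test drives is roughly the following (a sketch; the host directory is arbitrary, while the profile and mount point match this run):

# Mount a host directory into the guest over 9p, in the background.
minikube mount -p functional-943397 /tmp/hostdir:/mount-9p &

# Verify the mount from inside the guest. Note the first findmnt probe in the
# log above exits 1, apparently because the mount was still coming up; the
# test simply retries.
minikube -p functional-943397 ssh "findmnt -T /mount-9p | grep 9p"
minikube -p functional-943397 ssh -- ls -la /mount-9p

# Clean up: unmount inside the guest and kill the mount process.
minikube -p functional-943397 ssh "sudo umount -f /mount-9p"
minikube mount -p functional-943397 --kill=true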

TestFunctional/parallel/MountCmd/specific-port (1.85s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-943397 /tmp/TestFunctionalparallelMountCmdspecific-port996133655/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-943397 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (426.543224ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-943397 /tmp/TestFunctionalparallelMountCmdspecific-port996133655/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-943397 ssh "sudo umount -f /mount-9p": exit status 1 (329.802603ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-943397 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-943397 /tmp/TestFunctionalparallelMountCmdspecific-port996133655/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.85s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.53s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-943397 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1050659217/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-943397 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1050659217/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-943397 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1050659217/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-943397 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-943397 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1050659217/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-943397 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1050659217/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-943397 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1050659217/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.53s)

TestFunctional/parallel/Version/short (0.09s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

TestFunctional/parallel/Version/components (0.92s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.92s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-943397 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.3
registry.k8s.io/kube-proxy:v1.28.3
registry.k8s.io/kube-controller-manager:v1.28.3
registry.k8s.io/kube-apiserver:v1.28.3
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-943397
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-943397 image ls --format short --alsologtostderr:
I1114 13:48:04.225700 1219148 out.go:296] Setting OutFile to fd 1 ...
I1114 13:48:04.225879 1219148 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1114 13:48:04.225891 1219148 out.go:309] Setting ErrFile to fd 2...
I1114 13:48:04.225897 1219148 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1114 13:48:04.226196 1219148 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17581-1186318/.minikube/bin
I1114 13:48:04.226894 1219148 config.go:182] Loaded profile config "functional-943397": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1114 13:48:04.227043 1219148 config.go:182] Loaded profile config "functional-943397": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1114 13:48:04.227638 1219148 cli_runner.go:164] Run: docker container inspect functional-943397 --format={{.State.Status}}
I1114 13:48:04.247123 1219148 ssh_runner.go:195] Run: systemctl --version
I1114 13:48:04.247182 1219148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-943397
I1114 13:48:04.267807 1219148 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34289 SSHKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/functional-943397/id_rsa Username:docker}
I1114 13:48:04.366918 1219148 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-943397 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
| docker.io/library/nginx                 | alpine             | aae348c9fbd40 | 50.2MB |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| registry.k8s.io/etcd                    | 3.5.9-0            | 9cdd6470f48c8 | 182MB  |
| registry.k8s.io/kube-proxy              | v1.28.3            | a5dd5cdd6d3ef | 69.9MB |
| registry.k8s.io/kube-scheduler          | v1.28.3            | 42a4e73724daa | 59.2MB |
| gcr.io/k8s-minikube/busybox             | latest             | 71a676dd070f4 | 1.63MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| registry.k8s.io/coredns/coredns         | v1.10.1            | 97e04611ad434 | 51.4MB |
| registry.k8s.io/kube-controller-manager | v1.28.3            | 8276439b4f237 | 117MB  |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | 04b4eaa3d3db8 | 60.9MB |
| gcr.io/google-containers/addon-resizer  | functional-943397  | ffd4cfbbe753e | 34.1MB |
| registry.k8s.io/pause                   | 3.9                | 829e9de338bd5 | 520kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| localhost/my-image                      | functional-943397  | 2b599ad423ea2 | 1.64MB |
| registry.k8s.io/kube-apiserver          | v1.28.3            | 537e9a59ee2fd | 121MB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-943397 image ls --format table --alsologtostderr:
I1114 13:48:07.798390 1219458 out.go:296] Setting OutFile to fd 1 ...
I1114 13:48:07.798644 1219458 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1114 13:48:07.798679 1219458 out.go:309] Setting ErrFile to fd 2...
I1114 13:48:07.798713 1219458 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1114 13:48:07.799038 1219458 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17581-1186318/.minikube/bin
I1114 13:48:07.799856 1219458 config.go:182] Loaded profile config "functional-943397": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1114 13:48:07.800199 1219458 config.go:182] Loaded profile config "functional-943397": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1114 13:48:07.800856 1219458 cli_runner.go:164] Run: docker container inspect functional-943397 --format={{.State.Status}}
I1114 13:48:07.825618 1219458 ssh_runner.go:195] Run: systemctl --version
I1114 13:48:07.825673 1219458 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-943397
I1114 13:48:07.855501 1219458 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34289 SSHKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/functional-943397/id_rsa Username:docker}
I1114 13:48:07.954561 1219458 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-943397 image ls --format json --alsologtostderr:
[{"id":"aae348c9fbd40035f9fc24e2c9ccb9ac0a8977a3f3441a997bb40f6011d45e9b","repoDigests":["docker.io/library/nginx@sha256:b7537eea6ffa4f00aac311f16654b50736328eb370208c68b6649a97b7a2724b","docker.io/library/nginx@sha256:db353d0f0c479c91bd15e01fc68ed0f33d9c4c52f3415e63332c3d0bf7a4bb77"],"repoTags":["docker.io/library/nginx:alpine"],"size":"50212152"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-943397"],"size":"34114467"},{"id":"8276439b4f237dda1f7820b0fcef600bb5662e441aa00e7b7c45843e60f04a16","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707","registry.k8s.io/kube-controller-manager@sha256:c53671810fed4fd98b482a8e32f105585826221a4657ebd6181bc20becd3f0be"],"repoTags":["registry.k8s.io/kube-controller-manager:v1
.28.3"],"size":"117252916"},{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1634527"},{"id":"2b599ad423ea2b8e979e2d74170871e049bb6fc4835f93568ed82c6f71c63254","repoDigests":["localhost/my-image@sha256:000b505789a851a9dd72afb93824c711dd1929194a46b51e3aa2cd2e83fab5e1"],"repoTags":["localhost/my-image:functional-943397"],"size":"1640226"},{"id":"42a4e73724daac2ee0c96eeeb79b9cf5f242fc3927ccfdc4df63b58140097314","repoDigests":["registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725","registry.k8s.io/kube-scheduler@sha256:c0c5cdf040306fccc833bfa847f74be0f6ea5c828ba6c2a443210f68aa9bdd7c"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.3"],"size":"59188020"},{"id":"8057
e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"6f220f1763d60e7dccd66cb4a28edac7bb2f55a9f0d79cffce90f587e3dfcc18","repoDigests":["docker.io/library/
de95a37151144db65a26194f91ef8417b08ff65e6079e35bfd57c8266cda0040-tmp@sha256:342b5efa70d99eaf94cf23d9e0b17ebed37a50473f425b86f44b32d77567fa90"],"repoTags":[],"size":"1637644"},{"id":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","repoDigests":["registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3","registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"182203183"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"},{"id":"537e9a59ee2fdef3cc7f5ebd14f9c4c77047176fca2bc5599db196217efeb5d7","repoDigests":["registry.k8s.io/kube-apiserver@sha256:7055e7e0041a953d3fcec5950b88e8608ce09489f775dc0a8bd62a3300fd3ffa","registry.k8s.io/kube-apiserver@sha256:8db46adefb0
f251da210504e2ce268c36a5a7c630667418ea4601f63c9057a2d"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.3"],"size":"121054158"},{"id":"a5dd5cdd6d3ef8058b7fbcecacbcee7f522fa8b9f3bb53bac6570e62ba2cbdbd","repoDigests":["registry.k8s.io/kube-proxy@sha256:0228eb00239c0ea5f627a6191fc192f4e20606b57419ce9e2e0c1588f960b483","registry.k8s.io/kube-proxy@sha256:73a9f275e1fa5f0b9ae744914764847c2c4fdc66e9e528d67dea70007f9a6072"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.3"],"size":"69926807"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2"],
"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"60867618"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":["registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9a
cc5599105","registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"51393451"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"520014"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-943397 image ls --format json --alsologtostderr:
I1114 13:48:07.547732 1219433 out.go:296] Setting OutFile to fd 1 ...
I1114 13:48:07.547978 1219433 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1114 13:48:07.547991 1219433 out.go:309] Setting ErrFile to fd 2...
I1114 13:48:07.547997 1219433 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1114 13:48:07.548286 1219433 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17581-1186318/.minikube/bin
I1114 13:48:07.549043 1219433 config.go:182] Loaded profile config "functional-943397": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1114 13:48:07.549225 1219433 config.go:182] Loaded profile config "functional-943397": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1114 13:48:07.549797 1219433 cli_runner.go:164] Run: docker container inspect functional-943397 --format={{.State.Status}}
I1114 13:48:07.568631 1219433 ssh_runner.go:195] Run: systemctl --version
I1114 13:48:07.568684 1219433 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-943397
I1114 13:48:07.586322 1219433 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34289 SSHKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/functional-943397/id_rsa Username:docker}
I1114 13:48:07.682737 1219433 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-943397 image ls --format yaml --alsologtostderr:
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 04b4eaa3d3db8abea4b9ea4d10a0926ebb31db5a31b673aa1cf7a4b3af4add26
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:f61a1c916e587322444cab4e745a66c8bed6c30208e4dae28d5a1d18c070adb2
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "60867618"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: a5dd5cdd6d3ef8058b7fbcecacbcee7f522fa8b9f3bb53bac6570e62ba2cbdbd
repoDigests:
- registry.k8s.io/kube-proxy@sha256:0228eb00239c0ea5f627a6191fc192f4e20606b57419ce9e2e0c1588f960b483
- registry.k8s.io/kube-proxy@sha256:73a9f275e1fa5f0b9ae744914764847c2c4fdc66e9e528d67dea70007f9a6072
repoTags:
- registry.k8s.io/kube-proxy:v1.28.3
size: "69926807"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: aae348c9fbd40035f9fc24e2c9ccb9ac0a8977a3f3441a997bb40f6011d45e9b
repoDigests:
- docker.io/library/nginx@sha256:b7537eea6ffa4f00aac311f16654b50736328eb370208c68b6649a97b7a2724b
- docker.io/library/nginx@sha256:db353d0f0c479c91bd15e01fc68ed0f33d9c4c52f3415e63332c3d0bf7a4bb77
repoTags:
- docker.io/library/nginx:alpine
size: "50212152"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-943397
size: "34114467"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:3ec98b8452dc8ae265a6917dfb81587ac78849e520d5dbba6de524851d20eca6
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "520014"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 537e9a59ee2fdef3cc7f5ebd14f9c4c77047176fca2bc5599db196217efeb5d7
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:7055e7e0041a953d3fcec5950b88e8608ce09489f775dc0a8bd62a3300fd3ffa
- registry.k8s.io/kube-apiserver@sha256:8db46adefb0f251da210504e2ce268c36a5a7c630667418ea4601f63c9057a2d
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.3
size: "121054158"
- id: 42a4e73724daac2ee0c96eeeb79b9cf5f242fc3927ccfdc4df63b58140097314
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:2cfaab2fe5e5937bc37f3d05f3eb7a4912a981ab8375f1d9c2c3190b259d1725
- registry.k8s.io/kube-scheduler@sha256:c0c5cdf040306fccc833bfa847f74be0f6ea5c828ba6c2a443210f68aa9bdd7c
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.3
size: "59188020"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:74130b944396a0b0ca9af923ee6e03b08a35d98fc1bbaef4e35cf9acc5599105
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "51393451"
- id: 9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace
repoDigests:
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
- registry.k8s.io/etcd@sha256:e60789d18cc66486e6db4094383f9732280092f07a1f5455ecbe11d404c8e48b
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "182203183"
- id: 8276439b4f237dda1f7820b0fcef600bb5662e441aa00e7b7c45843e60f04a16
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:640661231facded984f698e79315bceb5391b04e5159662e940e6e5ab2098707
- registry.k8s.io/kube-controller-manager@sha256:c53671810fed4fd98b482a8e32f105585826221a4657ebd6181bc20becd3f0be
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.3
size: "117252916"
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-943397 image ls --format yaml --alsologtostderr:
I1114 13:48:04.503467 1219175 out.go:296] Setting OutFile to fd 1 ...
I1114 13:48:04.503785 1219175 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1114 13:48:04.503827 1219175 out.go:309] Setting ErrFile to fd 2...
I1114 13:48:04.503855 1219175 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1114 13:48:04.504292 1219175 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17581-1186318/.minikube/bin
I1114 13:48:04.505441 1219175 config.go:182] Loaded profile config "functional-943397": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1114 13:48:04.505726 1219175 config.go:182] Loaded profile config "functional-943397": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1114 13:48:04.506457 1219175 cli_runner.go:164] Run: docker container inspect functional-943397 --format={{.State.Status}}
I1114 13:48:04.526082 1219175 ssh_runner.go:195] Run: systemctl --version
I1114 13:48:04.526143 1219175 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-943397
I1114 13:48:04.546427 1219175 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34289 SSHKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/functional-943397/id_rsa Username:docker}
I1114 13:48:04.647546 1219175 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.77s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-943397 ssh pgrep buildkitd: exit status 1 (315.424241ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 image build -t localhost/my-image:functional-943397 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-943397 image build -t localhost/my-image:functional-943397 testdata/build --alsologtostderr: (2.174518221s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-arm64 -p functional-943397 image build -t localhost/my-image:functional-943397 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 6f220f1763d
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-943397
--> 2b599ad423e
Successfully tagged localhost/my-image:functional-943397
2b599ad423ea2b8e979e2d74170871e049bb6fc4835f93568ed82c6f71c63254
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-943397 image build -t localhost/my-image:functional-943397 testdata/build --alsologtostderr:
I1114 13:48:05.093233 1219250 out.go:296] Setting OutFile to fd 1 ...
I1114 13:48:05.095010 1219250 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1114 13:48:05.095028 1219250 out.go:309] Setting ErrFile to fd 2...
I1114 13:48:05.095035 1219250 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1114 13:48:05.095443 1219250 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17581-1186318/.minikube/bin
I1114 13:48:05.096184 1219250 config.go:182] Loaded profile config "functional-943397": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1114 13:48:05.096920 1219250 config.go:182] Loaded profile config "functional-943397": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
I1114 13:48:05.097508 1219250 cli_runner.go:164] Run: docker container inspect functional-943397 --format={{.State.Status}}
I1114 13:48:05.116463 1219250 ssh_runner.go:195] Run: systemctl --version
I1114 13:48:05.116519 1219250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-943397
I1114 13:48:05.140112 1219250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34289 SSHKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/functional-943397/id_rsa Username:docker}
I1114 13:48:05.238202 1219250 build_images.go:151] Building image from path: /tmp/build.1981617330.tar
I1114 13:48:05.238275 1219250 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1114 13:48:05.248943 1219250 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1981617330.tar
I1114 13:48:05.253639 1219250 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1981617330.tar: stat -c "%s %y" /var/lib/minikube/build/build.1981617330.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1981617330.tar': No such file or directory
I1114 13:48:05.253671 1219250 ssh_runner.go:362] scp /tmp/build.1981617330.tar --> /var/lib/minikube/build/build.1981617330.tar (3072 bytes)
I1114 13:48:05.284084 1219250 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1981617330
I1114 13:48:05.294980 1219250 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1981617330 -xf /var/lib/minikube/build/build.1981617330.tar
I1114 13:48:05.306368 1219250 crio.go:297] Building image: /var/lib/minikube/build/build.1981617330
I1114 13:48:05.306452 1219250 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-943397 /var/lib/minikube/build/build.1981617330 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1114 13:48:07.165388 1219250 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-943397 /var/lib/minikube/build/build.1981617330 --cgroup-manager=cgroupfs: (1.858906437s)
I1114 13:48:07.165465 1219250 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1981617330
I1114 13:48:07.176071 1219250 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1981617330.tar
I1114 13:48:07.186608 1219250 build_images.go:207] Built localhost/my-image:functional-943397 from /tmp/build.1981617330.tar
I1114 13:48:07.186637 1219250 build_images.go:123] succeeded building to: functional-943397
I1114 13:48:07.186642 1219250 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.77s)
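
The STEP lines in the build stdout above determine the Dockerfile in testdata/build, so the subtest can be reproduced by hand along these lines (a sketch; the actual contents of content.txt are not shown in the log):

mkdir build && cd build
echo placeholder > content.txt   # real contents not visible in the log
cat > Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
minikube -p functional-943397 image build -t localhost/my-image:functional-943397 .
minikube -p functional-943397 image ls   # localhost/my-image should now be listed

With the crio runtime, the stderr above shows minikube delegating the build to podman inside the node rather than to buildkitd, which is why the initial pgrep buildkitd probe exits 1.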

TestFunctional/parallel/ImageCommands/Setup (2.54s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
E1114 13:47:42.052043 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/addons-008546/client.crt: no such file or directory
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.514159292s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-943397
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.54s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.41s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 image load --daemon gcr.io/google-containers/addon-resizer:functional-943397 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-943397 image load --daemon gcr.io/google-containers/addon-resizer:functional-943397 --alsologtostderr: (4.144172427s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.41s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.1s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 image load --daemon gcr.io/google-containers/addon-resizer:functional-943397 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-arm64 -p functional-943397 image load --daemon gcr.io/google-containers/addon-resizer:functional-943397 --alsologtostderr: (2.811369798s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.10s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.84s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.873479459s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-943397
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 image load --daemon gcr.io/google-containers/addon-resizer:functional-943397 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-943397 image load --daemon gcr.io/google-containers/addon-resizer:functional-943397 --alsologtostderr: (3.667928641s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.84s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.93s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 image save gcr.io/google-containers/addon-resizer:functional-943397 /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.93s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 image rm gcr.io/google-containers/addon-resizer:functional-943397 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-arm64 -p functional-943397 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/addon-resizer-save.tar --alsologtostderr: (1.013222716s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.29s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-943397
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 image save --daemon gcr.io/google-containers/addon-resizer:functional-943397 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-arm64 -p functional-943397 image save --daemon gcr.io/google-containers/addon-resizer:functional-943397 --alsologtostderr: (1.002261211s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-943397
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.04s)
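
Taken together, ImageSaveToFile, ImageRemove, ImageLoadFromFile, and ImageSaveDaemon above amount to a full image round-trip; by hand it looks roughly like this (a sketch using the same image and profile as this run):

# Export a cached image from the cluster to a tarball on the host.
minikube -p functional-943397 image save gcr.io/google-containers/addon-resizer:functional-943397 ./addon-resizer-save.tar

# Delete it from the cluster runtime, then restore it from the tarball.
minikube -p functional-943397 image rm gcr.io/google-containers/addon-resizer:functional-943397
minikube -p functional-943397 image load ./addon-resizer-save.tar

# Copy the cluster's image back into the host Docker daemon and verify.
minikube -p functional-943397 image save --daemon gcr.io/google-containers/addon-resizer:functional-943397
docker image inspect gcr.io/google-containers/addon-resizer:functional-943397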

TestFunctional/parallel/UpdateContextCmd/no_changes (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.20s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-943397 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

TestFunctional/delete_addon-resizer_images (0.09s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-943397
--- PASS: TestFunctional/delete_addon-resizer_images (0.09s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-943397
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-943397
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestIngressAddonLegacy/StartLegacyK8sCluster (92.2s)
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-arm64 start -p ingress-addon-legacy-814110 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-arm64 start -p ingress-addon-legacy-814110 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m32.19988801s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (92.20s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.66s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-arm64 -p ingress-addon-legacy-814110 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.66s)

TestJSONOutput/start/Command (49.28s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-975977 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-975977 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (49.282357375s)
--- PASS: TestJSONOutput/start/Command (49.28s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.86s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-975977 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.86s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.74s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-975977 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.74s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.96s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-975977 --output=json --user=testUser
E1114 13:58:37.412309 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/addons-008546/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-975977 --output=json --user=testUser: (5.959052847s)
--- PASS: TestJSONOutput/stop/Command (5.96s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.27s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-636578 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-636578 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (92.909692ms)

-- stdout --
	{"specversion":"1.0","id":"ff482753-b283-4224-8436-393dad8f246c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-636578] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"eec7ac55-7192-4ebf-b1e0-1390eb99fd15","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17581"}}
	{"specversion":"1.0","id":"ae53107d-aaa3-4abc-adab-bdb08fe2ed1b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"2ed66eff-1dc9-45cd-b833-817ca4319881","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17581-1186318/kubeconfig"}}
	{"specversion":"1.0","id":"45bd2bbd-d001-4664-a8f4-14d5da809f81","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17581-1186318/.minikube"}}
	{"specversion":"1.0","id":"342e7958-b76f-4356-abf8-674f13360b56","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"4674274c-7d0d-44d8-aa11-67fae71ee11e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"73017091-9c38-43a5-9975-f2e75d469f66","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-636578" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-636578
--- PASS: TestErrorJSONOutput (0.27s)
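The events above are one CloudEvents JSON object per line, so the --output=json stream can be filtered with standard tools. A minimal sketch, assuming jq is available (the profile name is illustrative, not part of the test):

	out/minikube-linux-arm64 start -p demo --output=json --driver=fail \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'

Against the run above this would print the DRV_UNSUPPORTED_OS line from the final error event.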

TestKicCustomNetwork/create_custom_network (44.28s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-084628 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-084628 --network=: (42.102629345s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-084628" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-084628
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-084628: (2.150779076s)
--- PASS: TestKicCustomNetwork/create_custom_network (44.28s)
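The test only greps docker network ls for the generated name; while the profile still exists, the created network can be inspected directly. A sketch (the format string is added here for illustration, not part of the test):

	docker network inspect docker-network-084628 --format '{{.Driver}} {{(index .IPAM.Config 0).Subnet}}'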

TestKicCustomNetwork/use_default_bridge_network (36.23s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-590777 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-590777 --network=bridge: (34.110309818s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-590777" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-590777
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-590777: (2.094782903s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (36.23s)

TestKicExistingNetwork (34.51s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-225008 --network=existing-network
E1114 14:00:20.963178 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/functional-943397/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-225008 --network=existing-network: (32.326749924s)
helpers_test.go:175: Cleaning up "existing-network-225008" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-225008
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-225008: (2.025819089s)
--- PASS: TestKicExistingNetwork (34.51s)

TestKicCustomSubnet (37.42s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-690730 --subnet=192.168.60.0/24
E1114 14:01:09.660662 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/client.crt: no such file or directory
E1114 14:01:09.665914 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/client.crt: no such file or directory
E1114 14:01:09.676123 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/client.crt: no such file or directory
E1114 14:01:09.696324 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/client.crt: no such file or directory
E1114 14:01:09.736536 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/client.crt: no such file or directory
E1114 14:01:09.816793 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/client.crt: no such file or directory
E1114 14:01:09.977128 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/client.crt: no such file or directory
E1114 14:01:10.297624 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/client.crt: no such file or directory
E1114 14:01:10.938460 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/client.crt: no such file or directory
E1114 14:01:12.218679 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/client.crt: no such file or directory
E1114 14:01:14.779800 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-690730 --subnet=192.168.60.0/24: (35.229790823s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-690730 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-690730" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-690730
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-690730: (2.162654221s)
--- PASS: TestKicCustomSubnet (37.42s)
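The inspect template above is how the --subnet flag is tied back to the created network. A standalone check along the same lines (the comparison logic is added here for illustration):

	subnet=$(docker network inspect custom-subnet-690730 --format '{{(index .IPAM.Config 0).Subnet}}')
	[ "$subnet" = "192.168.60.0/24" ] && echo "subnet matches" || echo "unexpected subnet: $subnet"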

TestKicStaticIP (39.43s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-858015 --static-ip=192.168.200.200
E1114 14:01:19.900434 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/client.crt: no such file or directory
E1114 14:01:30.140703 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/client.crt: no such file or directory
E1114 14:01:50.620926 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/client.crt: no such file or directory
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-858015 --static-ip=192.168.200.200: (37.028174391s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-858015 ip
helpers_test.go:175: Cleaning up "static-ip-858015" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-858015
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-858015: (2.217700811s)
--- PASS: TestKicStaticIP (39.43s)
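The ip subcommand is the check that --static-ip took effect; an equivalent one-liner, assuming the profile is still running:

	[ "$(out/minikube-linux-arm64 -p static-ip-858015 ip)" = "192.168.200.200" ] && echo "static IP held"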

TestMainNoArgs (0.07s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.07s)

TestMinikubeProfile (69.96s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-438712 --driver=docker  --container-runtime=crio
E1114 14:02:14.368238 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/addons-008546/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-438712 --driver=docker  --container-runtime=crio: (30.744850512s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-441305 --driver=docker  --container-runtime=crio
E1114 14:02:31.581214 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-441305 --driver=docker  --container-runtime=crio: (33.708508983s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-438712
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-441305
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-441305" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-441305
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-441305: (2.063084083s)
helpers_test.go:175: Cleaning up "first-438712" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-438712
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-438712: (2.071271998s)
--- PASS: TestMinikubeProfile (69.96s)

TestMountStart/serial/StartWithMountFirst (9.92s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-557960 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-557960 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.923494414s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.92s)

TestMountStart/serial/VerifyMountFirst (0.3s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-557960 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.30s)
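ls only shows the path is populated; the mount itself can be confirmed from inside the node. A sketch (the grep assumes minikube's usual 9p host mount at this path):

	out/minikube-linux-arm64 -p mount-start-1-557960 ssh -- "mount | grep /minikube-host"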

TestMountStart/serial/StartWithMountSecond (7.41s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-559954 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-559954 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.405388738s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.41s)

TestMountStart/serial/VerifyMountSecond (0.31s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-559954 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.31s)

TestMountStart/serial/DeleteFirst (1.7s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-557960 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-557960 --alsologtostderr -v=5: (1.698714642s)
--- PASS: TestMountStart/serial/DeleteFirst (1.70s)

TestMountStart/serial/VerifyMountPostDelete (0.29s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-559954 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.29s)

TestMountStart/serial/Stop (1.24s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-559954
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-559954: (1.239777118s)
--- PASS: TestMountStart/serial/Stop (1.24s)

TestMountStart/serial/RestartStopped (7.78s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-559954
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-559954: (6.780730085s)
--- PASS: TestMountStart/serial/RestartStopped (7.78s)

TestMountStart/serial/VerifyMountPostStop (0.3s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-559954 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.30s)

TestMultiNode/serial/FreshStart2Nodes (95.78s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-683928 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E1114 14:03:53.501396 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-arm64 start -p multinode-683928 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m35.211787437s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-arm64 -p multinode-683928 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (95.78s)

TestMultiNode/serial/DeployApp2Nodes (5.67s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-683928 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-683928 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-683928 -- rollout status deployment/busybox: (3.372250354s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-683928 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-683928 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-683928 -- exec busybox-5bc68d56bd-rl6d4 -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-683928 -- exec busybox-5bc68d56bd-vf6zm -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-683928 -- exec busybox-5bc68d56bd-rl6d4 -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-683928 -- exec busybox-5bc68d56bd-vf6zm -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-683928 -- exec busybox-5bc68d56bd-rl6d4 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-683928 -- exec busybox-5bc68d56bd-vf6zm -- nslookup kubernetes.default.svc.cluster.local
E1114 14:05:20.963641 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/functional-943397/client.crt: no such file or directory
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.67s)

TestMultiNode/serial/AddNode (19.99s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-683928 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-683928 -v 3 --alsologtostderr: (19.213167973s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-arm64 -p multinode-683928 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (19.99s)

TestMultiNode/serial/ProfileList (0.4s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.40s)

TestMultiNode/serial/CopyFile (11.61s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-arm64 -p multinode-683928 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-683928 cp testdata/cp-test.txt multinode-683928:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-683928 ssh -n multinode-683928 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-683928 cp multinode-683928:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1980800561/001/cp-test_multinode-683928.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-683928 ssh -n multinode-683928 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-683928 cp multinode-683928:/home/docker/cp-test.txt multinode-683928-m02:/home/docker/cp-test_multinode-683928_multinode-683928-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-683928 ssh -n multinode-683928 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-683928 ssh -n multinode-683928-m02 "sudo cat /home/docker/cp-test_multinode-683928_multinode-683928-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-683928 cp multinode-683928:/home/docker/cp-test.txt multinode-683928-m03:/home/docker/cp-test_multinode-683928_multinode-683928-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-683928 ssh -n multinode-683928 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-683928 ssh -n multinode-683928-m03 "sudo cat /home/docker/cp-test_multinode-683928_multinode-683928-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-683928 cp testdata/cp-test.txt multinode-683928-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-683928 ssh -n multinode-683928-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-683928 cp multinode-683928-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1980800561/001/cp-test_multinode-683928-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-683928 ssh -n multinode-683928-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-683928 cp multinode-683928-m02:/home/docker/cp-test.txt multinode-683928:/home/docker/cp-test_multinode-683928-m02_multinode-683928.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-683928 ssh -n multinode-683928-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-683928 ssh -n multinode-683928 "sudo cat /home/docker/cp-test_multinode-683928-m02_multinode-683928.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-683928 cp multinode-683928-m02:/home/docker/cp-test.txt multinode-683928-m03:/home/docker/cp-test_multinode-683928-m02_multinode-683928-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-683928 ssh -n multinode-683928-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-683928 ssh -n multinode-683928-m03 "sudo cat /home/docker/cp-test_multinode-683928-m02_multinode-683928-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-683928 cp testdata/cp-test.txt multinode-683928-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-683928 ssh -n multinode-683928-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-683928 cp multinode-683928-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1980800561/001/cp-test_multinode-683928-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-683928 ssh -n multinode-683928-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-683928 cp multinode-683928-m03:/home/docker/cp-test.txt multinode-683928:/home/docker/cp-test_multinode-683928-m03_multinode-683928.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-683928 ssh -n multinode-683928-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-683928 ssh -n multinode-683928 "sudo cat /home/docker/cp-test_multinode-683928-m03_multinode-683928.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-683928 cp multinode-683928-m03:/home/docker/cp-test.txt multinode-683928-m02:/home/docker/cp-test_multinode-683928-m03_multinode-683928-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-683928 ssh -n multinode-683928-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-683928 ssh -n multinode-683928-m02 "sudo cat /home/docker/cp-test_multinode-683928-m03_multinode-683928-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (11.61s)
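The cp subcommand addresses files as [node:]path on either side, and ssh -n targets a specific node, which is all this test exercises. A minimal round trip (the file name is hypothetical):

	out/minikube-linux-arm64 -p multinode-683928 cp /tmp/hello.txt multinode-683928-m02:/home/docker/hello.txt
	out/minikube-linux-arm64 -p multinode-683928 ssh -n multinode-683928-m02 "cat /home/docker/hello.txt"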

TestMultiNode/serial/StopNode (2.39s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-arm64 -p multinode-683928 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-arm64 -p multinode-683928 node stop m03: (1.258753766s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-arm64 -p multinode-683928 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-683928 status: exit status 7 (566.293423ms)

-- stdout --
	multinode-683928
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-683928-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-683928-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-arm64 -p multinode-683928 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-683928 status --alsologtostderr: exit status 7 (565.637033ms)

-- stdout --
	multinode-683928
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-683928-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-683928-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1114 14:05:59.306671 1265450 out.go:296] Setting OutFile to fd 1 ...
	I1114 14:05:59.306947 1265450 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 14:05:59.306981 1265450 out.go:309] Setting ErrFile to fd 2...
	I1114 14:05:59.307004 1265450 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 14:05:59.307336 1265450 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17581-1186318/.minikube/bin
	I1114 14:05:59.307569 1265450 out.go:303] Setting JSON to false
	I1114 14:05:59.307658 1265450 mustload.go:65] Loading cluster: multinode-683928
	I1114 14:05:59.307705 1265450 notify.go:220] Checking for updates...
	I1114 14:05:59.308257 1265450 config.go:182] Loaded profile config "multinode-683928": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 14:05:59.308298 1265450 status.go:255] checking status of multinode-683928 ...
	I1114 14:05:59.308992 1265450 cli_runner.go:164] Run: docker container inspect multinode-683928 --format={{.State.Status}}
	I1114 14:05:59.330326 1265450 status.go:330] multinode-683928 host status = "Running" (err=<nil>)
	I1114 14:05:59.330373 1265450 host.go:66] Checking if "multinode-683928" exists ...
	I1114 14:05:59.330761 1265450 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-683928
	I1114 14:05:59.350110 1265450 host.go:66] Checking if "multinode-683928" exists ...
	I1114 14:05:59.350455 1265450 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1114 14:05:59.350510 1265450 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-683928
	I1114 14:05:59.374700 1265450 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34354 SSHKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/multinode-683928/id_rsa Username:docker}
	I1114 14:05:59.471314 1265450 ssh_runner.go:195] Run: systemctl --version
	I1114 14:05:59.476976 1265450 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 14:05:59.490510 1265450 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1114 14:05:59.565553 1265450 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:55 SystemTime:2023-11-14 14:05:59.555698935 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1114 14:05:59.566178 1265450 kubeconfig.go:92] found "multinode-683928" server: "https://192.168.58.2:8443"
	I1114 14:05:59.566214 1265450 api_server.go:166] Checking apiserver status ...
	I1114 14:05:59.566256 1265450 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1114 14:05:59.579687 1265450 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1230/cgroup
	I1114 14:05:59.591420 1265450 api_server.go:182] apiserver freezer: "2:freezer:/docker/95780648ef67ea835cd8638bb1ad39dc71166d07c9ffffe13531b9d9cc13b597/crio/crio-90797a3e0e930936a9e28981c3fdc1d9d6af3d8a0a27c6cf8c6fc70e4d788473"
	I1114 14:05:59.591492 1265450 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/95780648ef67ea835cd8638bb1ad39dc71166d07c9ffffe13531b9d9cc13b597/crio/crio-90797a3e0e930936a9e28981c3fdc1d9d6af3d8a0a27c6cf8c6fc70e4d788473/freezer.state
	I1114 14:05:59.602367 1265450 api_server.go:204] freezer state: "THAWED"
	I1114 14:05:59.602396 1265450 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1114 14:05:59.611526 1265450 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1114 14:05:59.611556 1265450 status.go:421] multinode-683928 apiserver status = Running (err=<nil>)
	I1114 14:05:59.611567 1265450 status.go:257] multinode-683928 status: &{Name:multinode-683928 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1114 14:05:59.611584 1265450 status.go:255] checking status of multinode-683928-m02 ...
	I1114 14:05:59.611918 1265450 cli_runner.go:164] Run: docker container inspect multinode-683928-m02 --format={{.State.Status}}
	I1114 14:05:59.630005 1265450 status.go:330] multinode-683928-m02 host status = "Running" (err=<nil>)
	I1114 14:05:59.630030 1265450 host.go:66] Checking if "multinode-683928-m02" exists ...
	I1114 14:05:59.630352 1265450 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-683928-m02
	I1114 14:05:59.648884 1265450 host.go:66] Checking if "multinode-683928-m02" exists ...
	I1114 14:05:59.649195 1265450 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1114 14:05:59.649240 1265450 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-683928-m02
	I1114 14:05:59.667675 1265450 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34359 SSHKeyPath:/home/jenkins/minikube-integration/17581-1186318/.minikube/machines/multinode-683928-m02/id_rsa Username:docker}
	I1114 14:05:59.763792 1265450 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1114 14:05:59.778124 1265450 status.go:257] multinode-683928-m02 status: &{Name:multinode-683928-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1114 14:05:59.778164 1265450 status.go:255] checking status of multinode-683928-m03 ...
	I1114 14:05:59.778511 1265450 cli_runner.go:164] Run: docker container inspect multinode-683928-m03 --format={{.State.Status}}
	I1114 14:05:59.797708 1265450 status.go:330] multinode-683928-m03 host status = "Stopped" (err=<nil>)
	I1114 14:05:59.797733 1265450 status.go:343] host is not running, skipping remaining checks
	I1114 14:05:59.797740 1265450 status.go:257] multinode-683928-m03 status: &{Name:multinode-683928-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.39s)
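The stderr trace shows where each status field comes from: host state from docker container inspect, apiserver health from the /healthz endpoint. The same probes can be run by hand; a sketch using values from this log (curl -k is assumed to suffice because Kubernetes exposes /healthz to unauthenticated clients through the system:public-info-viewer role):

	docker container inspect multinode-683928 --format '{{.State.Status}}'
	curl -sk https://192.168.58.2:8443/healthz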

TestMultiNode/serial/StartAfterStop (12.78s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:244: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-683928 node start m03 --alsologtostderr
E1114 14:06:09.661173 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/client.crt: no such file or directory
multinode_test.go:254: (dbg) Done: out/minikube-linux-arm64 -p multinode-683928 node start m03 --alsologtostderr: (11.925956435s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-683928 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (12.78s)

TestMultiNode/serial/RestartKeepsNodes (120.43s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-683928
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-683928
E1114 14:06:37.341794 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/client.crt: no such file or directory
multinode_test.go:290: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-683928: (25.073642241s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-683928 --wait=true -v=8 --alsologtostderr
E1114 14:06:44.010590 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/functional-943397/client.crt: no such file or directory
E1114 14:07:14.368329 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/addons-008546/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-arm64 start -p multinode-683928 --wait=true -v=8 --alsologtostderr: (1m35.186106234s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-683928
--- PASS: TestMultiNode/serial/RestartKeepsNodes (120.43s)

TestMultiNode/serial/DeleteNode (5.16s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-arm64 -p multinode-683928 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-arm64 -p multinode-683928 node delete m03: (4.40109s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-arm64 -p multinode-683928 status --alsologtostderr
multinode_test.go:414: (dbg) Run:  docker volume ls
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.16s)
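The go-template above prints one status per Ready condition; a jsonpath equivalent that also names each node (added as an illustration, not part of the test):

	kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'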

TestMultiNode/serial/StopMultiNode (24.17s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p multinode-683928 stop
multinode_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p multinode-683928 stop: (23.933020386s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-arm64 -p multinode-683928 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-683928 status: exit status 7 (118.825713ms)

-- stdout --
	multinode-683928
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-683928-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-arm64 -p multinode-683928 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-683928 status --alsologtostderr: exit status 7 (114.28545ms)

-- stdout --
	multinode-683928
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-683928-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1114 14:08:42.296443 1273767 out.go:296] Setting OutFile to fd 1 ...
	I1114 14:08:42.296724 1273767 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 14:08:42.296757 1273767 out.go:309] Setting ErrFile to fd 2...
	I1114 14:08:42.296778 1273767 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 14:08:42.297108 1273767 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17581-1186318/.minikube/bin
	I1114 14:08:42.297336 1273767 out.go:303] Setting JSON to false
	I1114 14:08:42.297428 1273767 mustload.go:65] Loading cluster: multinode-683928
	I1114 14:08:42.297510 1273767 notify.go:220] Checking for updates...
	I1114 14:08:42.297926 1273767 config.go:182] Loaded profile config "multinode-683928": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 14:08:42.297969 1273767 status.go:255] checking status of multinode-683928 ...
	I1114 14:08:42.298582 1273767 cli_runner.go:164] Run: docker container inspect multinode-683928 --format={{.State.Status}}
	I1114 14:08:42.317668 1273767 status.go:330] multinode-683928 host status = "Stopped" (err=<nil>)
	I1114 14:08:42.317723 1273767 status.go:343] host is not running, skipping remaining checks
	I1114 14:08:42.317731 1273767 status.go:257] multinode-683928 status: &{Name:multinode-683928 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1114 14:08:42.317758 1273767 status.go:255] checking status of multinode-683928-m02 ...
	I1114 14:08:42.318072 1273767 cli_runner.go:164] Run: docker container inspect multinode-683928-m02 --format={{.State.Status}}
	I1114 14:08:42.338330 1273767 status.go:330] multinode-683928-m02 host status = "Stopped" (err=<nil>)
	I1114 14:08:42.338355 1273767 status.go:343] host is not running, skipping remaining checks
	I1114 14:08:42.338362 1273767 status.go:257] multinode-683928-m02 status: &{Name:multinode-683928-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.17s)
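Note the exit codes: "minikube status" exits 7 when a host exists but is stopped, which is the expected result here for both nodes. A minimal shell sketch of branching on that code (not part of the suite; the profile name is reused from the log):

    if out/minikube-linux-arm64 -p multinode-683928 status; then
        echo "all nodes running"
    else
        rc=$?
        # exit code 7 means "host stopped", as seen twice above, not an internal error
        [ "$rc" -eq 7 ] && echo "cluster stopped; bring it back with 'minikube start'"
    fi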

TestMultiNode/serial/RestartMultiNode (89.12s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:344: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:354: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-683928 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:354: (dbg) Done: out/minikube-linux-arm64 start -p multinode-683928 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m28.281518423s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-arm64 -p multinode-683928 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (89.12s)
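The go-template passed to kubectl above walks each node's status.conditions and prints the status of the "Ready" condition, so a fully restarted two-node cluster prints two "True" lines. An equivalent jsonpath form (standard kubectl syntax, not what the test itself runs) would be:

    kubectl get nodes -o jsonpath='{range .items[*]}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'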

TestMultiNode/serial/ValidateNameConflict (36.8s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-683928
multinode_test.go:452: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-683928-m02 --driver=docker  --container-runtime=crio
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-683928-m02 --driver=docker  --container-runtime=crio: exit status 14 (102.683603ms)

-- stdout --
	* [multinode-683928-m02] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17581
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17581-1186318/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17581-1186318/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-683928-m02' is duplicated with machine name 'multinode-683928-m02' in profile 'multinode-683928'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-683928-m03 --driver=docker  --container-runtime=crio
E1114 14:10:20.963199 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/functional-943397/client.crt: no such file or directory
multinode_test.go:460: (dbg) Done: out/minikube-linux-arm64 start -p multinode-683928-m03 --driver=docker  --container-runtime=crio: (34.217272598s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-683928
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-683928: exit status 80 (378.054752ms)

-- stdout --
	* Adding node m03 to cluster multinode-683928
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-683928-m03 already exists in multinode-683928-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-683928-m03
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-683928-m03: (2.02069685s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (36.80s)
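Both rejections above are the behavior under test: profile names share a namespace with per-node machine names such as "multinode-683928-m02", so a new profile cannot reuse an existing cluster's node name, and "node add" refuses when the candidate node name is already a standalone profile. A rough pre-flight check (the grep pattern is an illustration, not something the suite runs):

    out/minikube-linux-arm64 profile list --output json | grep -q 'multinode-683928-m02' \
        && echo "name already in use" || echo "name free"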

TestPreload (149.49s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-368514 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E1114 14:11:09.661150 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/client.crt: no such file or directory
E1114 14:12:14.368749 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/addons-008546/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-368514 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m24.318843041s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-368514 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-368514 image pull gcr.io/k8s-minikube/busybox: (1.820517576s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-368514
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-368514: (5.93573922s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-368514 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-368514 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (54.713457755s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-368514 image list
helpers_test.go:175: Cleaning up "test-preload-368514" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-368514
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-368514: (2.41856855s)
--- PASS: TestPreload (149.49s)

TestScheduledStopUnix (105.7s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-942516 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-942516 --memory=2048 --driver=docker  --container-runtime=crio: (29.136667254s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-942516 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-942516 -n scheduled-stop-942516
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-942516 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-942516 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-942516 -n scheduled-stop-942516
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-942516
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-942516 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-942516
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-942516: exit status 7 (100.546911ms)

-- stdout --
	scheduled-stop-942516
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-942516 -n scheduled-stop-942516
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-942516 -n scheduled-stop-942516: exit status 7 (92.810035ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-942516" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-942516
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-942516: (4.712736249s)
--- PASS: TestScheduledStopUnix (105.70s)
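The run exercises the scheduled-stop flags in sequence: "--schedule 5m" arms a delayed stop, a second "--schedule" call replaces the pending one (hence "os: process already finished"), "--cancel-scheduled" disarms it, and the final "--schedule 15s" is left to fire, which is why status later reports Stopped with exit code 7. The same flow by hand, using the flags exactly as the log invokes them:

    minikube stop -p scheduled-stop-942516 --schedule 5m        # arm a stop in 5 minutes
    minikube stop -p scheduled-stop-942516 --cancel-scheduled   # disarm it
    minikube stop -p scheduled-stop-942516 --schedule 15s       # re-arm; fires 15s later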

TestInsufficientStorage (11.2s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-925949 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
E1114 14:15:17.413697 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/addons-008546/client.crt: no such file or directory
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-925949 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (8.554485431s)

-- stdout --
	{"specversion":"1.0","id":"09958e63-e258-4374-af66-47591866d194","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-925949] minikube v1.32.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"19874c46-749c-4a8b-83f5-d16ddc25e709","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17581"}}
	{"specversion":"1.0","id":"922a38b1-e3e0-4a2a-9309-088fcee75de2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"7314aa93-7dbf-4dd3-94cb-fc466a5015e0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17581-1186318/kubeconfig"}}
	{"specversion":"1.0","id":"99844fc5-4ff2-4529-b8cd-3ca45e194996","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17581-1186318/.minikube"}}
	{"specversion":"1.0","id":"6ddf5b74-7ddf-4b6a-be64-70db9a6faaaa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"bf707bbd-2855-4689-bb6e-1abb780d2482","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d79c1167-1290-45df-a2a5-ca6707b9f4bc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"1832b802-a388-4716-ad40-bb436bed6e49","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"7c4a5548-d5c1-4ac0-ad2c-798e7abb3ab4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"f971a081-56a4-4710-9417-82f47d64ea97","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"4d6dc0d3-867a-4684-972c-fa0a1af00b96","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-925949 in cluster insufficient-storage-925949","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"2957702c-a24f-40e2-9fff-4d4cada7ab5e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"c467853a-b0dc-4d56-a7ec-f67166333add","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"405196db-f793-4b99-b7c2-6b9e6d3f10e1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-925949 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-925949 --output=json --layout=cluster: exit status 7 (338.029924ms)

-- stdout --
	{"Name":"insufficient-storage-925949","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-925949","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1114 14:15:18.933237 1290307 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-925949" does not appear in /home/jenkins/minikube-integration/17581-1186318/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-925949 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-925949 --output=json --layout=cluster: exit status 7 (326.944909ms)

-- stdout --
	{"Name":"insufficient-storage-925949","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-925949","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1114 14:15:19.260193 1290361 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-925949" does not appear in /home/jenkins/minikube-integration/17581-1186318/kubeconfig
	E1114 14:15:19.273384 1290361 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/insufficient-storage-925949/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-925949" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-925949
E1114 14:15:20.963196 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/functional-943397/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-925949: (1.983534429s)
--- PASS: TestInsufficientStorage (11.20s)
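Exit code 26 (RSRC_DOCKER_STORAGE) is induced here by the MINIKUBE_TEST_STORAGE_CAPACITY=100 and MINIKUBE_TEST_AVAILABLE_STORAGE=19 overrides visible in the JSON events, which make /var look full. Outside of a test, the remedy is the advice embedded in the error payload; its commands, quoted directly:

    docker system prune -a               # reclaim unused Docker data on the host
    minikube ssh -- docker system prune  # same inside the node, if using the docker runtime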

TestKubernetesUpgrade (420.1s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-418193 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-418193 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m9.620412611s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-418193
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-418193: (2.604303949s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-418193 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-418193 status --format={{.Host}}: exit status 7 (103.575857ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-418193 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-418193 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m52.653526992s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-418193 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-418193 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-418193 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio: exit status 106 (171.196436ms)

-- stdout --
	* [kubernetes-upgrade-418193] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17581
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17581-1186318/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17581-1186318/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.28.3 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-418193
	    minikube start -p kubernetes-upgrade-418193 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4181932 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.28.3, by running:
	    
	    minikube start -p kubernetes-upgrade-418193 --kubernetes-version=v1.28.3
	    

** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-418193 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-418193 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (52.191453997s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-418193" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-418193
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-418193: (2.563425343s)
--- PASS: TestKubernetesUpgrade (420.10s)
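Exit code 106 (K8S_DOWNGRADE_UNSUPPORTED) confirms that in-place downgrades are refused; only the three recovery paths printed in the error are supported. The delete-and-recreate option, copied from the suggestion above:

    minikube delete -p kubernetes-upgrade-418193
    minikube start -p kubernetes-upgrade-418193 --kubernetes-version=v1.16.0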

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-151108 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-151108 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (104.82538ms)

-- stdout --
	* [NoKubernetes-151108] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17581
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17581-1186318/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17581-1186318/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
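Exit code 14 (MK_USAGE) is the expected outcome: "--no-kubernetes" and "--kubernetes-version" are mutually exclusive. Following the error's own advice, a valid retry clears any globally pinned version and starts without one (the second command appears verbatim in the tests below):

    minikube config unset kubernetes-version
    out/minikube-linux-arm64 start -p NoKubernetes-151108 --no-kubernetes --driver=docker --container-runtime=crio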

TestNoKubernetes/serial/StartWithK8s (50.44s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-151108 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-151108 --driver=docker  --container-runtime=crio: (49.916027138s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-151108 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (50.44s)

TestNoKubernetes/serial/StartWithStopK8s (10.54s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-151108 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-151108 --no-kubernetes --driver=docker  --container-runtime=crio: (7.723396527s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-151108 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-151108 status -o json: exit status 2 (396.291277ms)

-- stdout --
	{"Name":"NoKubernetes-151108","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-151108
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-151108: (2.419882747s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (10.54s)

TestNoKubernetes/serial/Start (9.95s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-151108 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-151108 --no-kubernetes --driver=docker  --container-runtime=crio: (9.952682862s)
--- PASS: TestNoKubernetes/serial/Start (9.95s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.33s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-151108 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-151108 "sudo systemctl is-active --quiet service kubelet": exit status 1 (326.243498ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.33s)
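The remote exit status 3 follows the systemd convention: "systemctl is-active" exits 0 for an active unit and 3 for an inactive one, which the test reads as proof that no kubelet is running. The same probe by hand (assuming standard systemctl semantics):

    out/minikube-linux-arm64 ssh -p NoKubernetes-151108 "sudo systemctl is-active --quiet service kubelet" \
        || echo "kubelet inactive, as expected with --no-kubernetes"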

TestNoKubernetes/serial/ProfileList (0.93s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.93s)

TestNoKubernetes/serial/Stop (1.24s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-151108
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-151108: (1.240706278s)
--- PASS: TestNoKubernetes/serial/Stop (1.24s)

TestNoKubernetes/serial/StartNoArgs (8.85s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-151108 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-151108 --driver=docker  --container-runtime=crio: (8.851406403s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.85s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.4s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-151108 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-151108 "sudo systemctl is-active --quiet service kubelet": exit status 1 (397.01995ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.40s)

TestStoppedBinaryUpgrade/Setup (1.26s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.26s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.7s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-697984
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.70s)

TestPause/serial/Start (82.57s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-639837 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E1114 14:21:09.661229 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/client.crt: no such file or directory
E1114 14:22:14.368669 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/addons-008546/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-639837 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m22.567380571s)
--- PASS: TestPause/serial/Start (82.57s)

TestPause/serial/SecondStartNoReconfiguration (40.58s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-639837 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-639837 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (40.56608242s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (40.58s)

TestPause/serial/Pause (0.92s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-639837 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.92s)

TestPause/serial/VerifyStatus (0.37s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-639837 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-639837 --output=json --layout=cluster: exit status 2 (366.503956ms)

-- stdout --
	{"Name":"pause-639837","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-639837","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.37s)
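The cluster-layout JSON encodes state as HTTP-style codes: 418 for Paused, 405 for Stopped, 200 for OK. A small sketch for extracting one field (jq is assumed to be installed; the suite itself does not use it):

    out/minikube-linux-arm64 status -p pause-639837 --output=json --layout=cluster | jq -r '.StatusName'
    # prints "Paused" here; the status command itself still exits 2 for a paused cluster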

TestPause/serial/Unpause (0.77s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-639837 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.77s)

TestPause/serial/PauseAgain (1.21s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-639837 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-639837 --alsologtostderr -v=5: (1.211678017s)
--- PASS: TestPause/serial/PauseAgain (1.21s)

TestPause/serial/DeletePaused (3.4s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-639837 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-639837 --alsologtostderr -v=5: (3.398430413s)
--- PASS: TestPause/serial/DeletePaused (3.40s)

TestPause/serial/VerifyDeletedResources (8.4s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (8.335754199s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-639837
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-639837: exit status 1 (20.856509ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-639837: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (8.40s)
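The verification pattern above is a reusable cleanup check: after "minikube delete", the profile should be gone from "profile list", "docker ps -a", and "docker network ls", and "docker volume inspect" should fail with "no such volume", exactly as it does. Condensed into one line:

    docker volume inspect pause-639837 >/dev/null 2>&1 && echo "leaked volume" || echo "volume gone"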

TestNetworkPlugins/group/false (6.24s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-127726 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-127726 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (221.703937ms)

-- stdout --
	* [false-127726] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=17581
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17581-1186318/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17581-1186318/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1114 14:23:52.176524 1330321 out.go:296] Setting OutFile to fd 1 ...
	I1114 14:23:52.176682 1330321 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 14:23:52.176692 1330321 out.go:309] Setting ErrFile to fd 2...
	I1114 14:23:52.176698 1330321 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1114 14:23:52.176965 1330321 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17581-1186318/.minikube/bin
	I1114 14:23:52.177361 1330321 out.go:303] Setting JSON to false
	I1114 14:23:52.178488 1330321 start.go:128] hostinfo: {"hostname":"ip-172-31-21-244","uptime":39979,"bootTime":1699931854,"procs":231,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1114 14:23:52.178568 1330321 start.go:138] virtualization:  
	I1114 14:23:52.181419 1330321 out.go:177] * [false-127726] minikube v1.32.0 on Ubuntu 20.04 (arm64)
	I1114 14:23:52.183637 1330321 out.go:177]   - MINIKUBE_LOCATION=17581
	I1114 14:23:52.185555 1330321 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1114 14:23:52.183782 1330321 notify.go:220] Checking for updates...
	I1114 14:23:52.187883 1330321 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17581-1186318/kubeconfig
	I1114 14:23:52.189898 1330321 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17581-1186318/.minikube
	I1114 14:23:52.191742 1330321 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1114 14:23:52.193566 1330321 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1114 14:23:52.196059 1330321 config.go:182] Loaded profile config "force-systemd-flag-832181": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1114 14:23:52.196182 1330321 driver.go:378] Setting default libvirt URI to qemu:///system
	I1114 14:23:52.223484 1330321 docker.go:122] docker version: linux-24.0.7:Docker Engine - Community
	I1114 14:23:52.223582 1330321 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1114 14:23:52.310707 1330321 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:36 OomKillDisable:true NGoroutines:45 SystemTime:2023-11-14 14:23:52.300616777 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1049-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8215035904 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:24.0.7 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523 Expected:61f9fd88f79f081d64d6fa3bb1a0dc71ec870523} RuncCommit:{ID:v1.1.9-0-gccaecfc Expected:v1.1.9-0-gccaecfc} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0]] Warnings:<nil>}}
	I1114 14:23:52.310829 1330321 docker.go:295] overlay module found
	I1114 14:23:52.313593 1330321 out.go:177] * Using the docker driver based on user configuration
	I1114 14:23:52.316064 1330321 start.go:298] selected driver: docker
	I1114 14:23:52.316083 1330321 start.go:902] validating driver "docker" against <nil>
	I1114 14:23:52.316097 1330321 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1114 14:23:52.318956 1330321 out.go:177] 
	W1114 14:23:52.321884 1330321 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1114 14:23:52.323908 1330321 out.go:177] 

** /stderr **
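The MK_USAGE rejection is the behavior under test: the crio runtime has no built-in pod networking, so "--cni=false" is invalid with it. A start line that would satisfy the constraint (values such as "bridge" are standard minikube --cni options; this invocation is not run by the test):

    out/minikube-linux-arm64 start -p false-127726 --memory=2048 --cni=bridge --driver=docker --container-runtime=crio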
net_test.go:88: 
----------------------- debugLogs start: false-127726 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-127726

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-127726

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-127726

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-127726

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-127726

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-127726

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-127726

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-127726

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-127726

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-127726

>>> host: /etc/nsswitch.conf:
* Profile "false-127726" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127726"

>>> host: /etc/hosts:
* Profile "false-127726" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127726"

>>> host: /etc/resolv.conf:
* Profile "false-127726" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127726"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-127726

>>> host: crictl pods:
* Profile "false-127726" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127726"

>>> host: crictl containers:
* Profile "false-127726" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127726"

>>> k8s: describe netcat deployment:
error: context "false-127726" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-127726" does not exist

>>> k8s: netcat logs:
error: context "false-127726" does not exist

>>> k8s: describe coredns deployment:
error: context "false-127726" does not exist

>>> k8s: describe coredns pods:
error: context "false-127726" does not exist

>>> k8s: coredns logs:
error: context "false-127726" does not exist

>>> k8s: describe api server pod(s):
error: context "false-127726" does not exist

>>> k8s: api server logs:
error: context "false-127726" does not exist

>>> host: /etc/cni:
* Profile "false-127726" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127726"

>>> host: ip a s:
* Profile "false-127726" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127726"

>>> host: ip r s:
* Profile "false-127726" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127726"

>>> host: iptables-save:
* Profile "false-127726" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127726"

>>> host: iptables table nat:
* Profile "false-127726" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127726"

>>> k8s: describe kube-proxy daemon set:
error: context "false-127726" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-127726" does not exist

>>> k8s: kube-proxy logs:
error: context "false-127726" does not exist

>>> host: kubelet daemon status:
* Profile "false-127726" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127726"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-127726" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127726"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-127726" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127726"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-127726" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127726"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-127726" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127726"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17581-1186318/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 14 Nov 2023 14:23:54 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: force-systemd-flag-832181
contexts:
- context:
    cluster: force-systemd-flag-832181
    extensions:
    - extension:
        last-update: Tue, 14 Nov 2023 14:23:54 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: force-systemd-flag-832181
  name: force-systemd-flag-832181
current-context: force-systemd-flag-832181
kind: Config
preferences: {}
users:
- name: force-systemd-flag-832181
  user:
    client-certificate: /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/force-systemd-flag-832181/client.crt
    client-key: /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/force-systemd-flag-832181/client.key

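Note that the kubeconfig dumped above contains only the force-systemd-flag-832181 cluster, context, and user, which is why every lookup against the "false-127726" context in this debug section fails with "context does not exist". As a hedged illustration (assuming kubectl is on PATH, and /tmp/kubeconfig is a hypothetical path the dump was saved to), the contexts a config actually offers can be listed directly:

	kubectl --kubeconfig /tmp/kubeconfig config get-contexts
	kubectl --kubeconfig /tmp/kubeconfig config current-context   # prints: force-systemd-flag-832181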

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-127726

>>> host: docker daemon status:
* Profile "false-127726" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127726"

>>> host: docker daemon config:
* Profile "false-127726" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127726"

>>> host: /etc/docker/daemon.json:
* Profile "false-127726" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127726"

>>> host: docker system info:
* Profile "false-127726" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127726"

>>> host: cri-docker daemon status:
* Profile "false-127726" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127726"

>>> host: cri-docker daemon config:
* Profile "false-127726" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127726"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-127726" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127726"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-127726" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127726"

>>> host: cri-dockerd version:
* Profile "false-127726" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127726"

>>> host: containerd daemon status:
* Profile "false-127726" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127726"

>>> host: containerd daemon config:
* Profile "false-127726" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127726"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-127726" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127726"

>>> host: /etc/containerd/config.toml:
* Profile "false-127726" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127726"

>>> host: containerd config dump:
* Profile "false-127726" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127726"

>>> host: crio daemon status:
* Profile "false-127726" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127726"

>>> host: crio daemon config:
* Profile "false-127726" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127726"

>>> host: /etc/crio:
* Profile "false-127726" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127726"

>>> host: crio config:
* Profile "false-127726" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-127726"

----------------------- debugLogs end: false-127726 [took: 5.84163751s] --------------------------------
helpers_test.go:175: Cleaning up "false-127726" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-127726
--- PASS: TestNetworkPlugins/group/false (6.24s)

TestStartStop/group/old-k8s-version/serial/FirstStart (132.53s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-512277 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
E1114 14:26:09.660520 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/client.crt: no such file or directory
E1114 14:27:14.368757 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/addons-008546/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-512277 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (2m12.534023123s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (132.53s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.61s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-512277 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0b4c264b-3a5c-4a99-b76d-3c7b9a4eb7d1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [0b4c264b-3a5c-4a99-b76d-3c7b9a4eb7d1] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.037538491s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-512277 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.61s)
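The DeployApp flow above (create the busybox pod from testdata, wait for it to become Ready, then read the open-file limit inside the container) can be approximated by hand; a minimal sketch, assuming the same profile name and a busybox manifest like the test's testdata/busybox.yaml:

	kubectl --context old-k8s-version-512277 create -f testdata/busybox.yaml
	kubectl --context old-k8s-version-512277 wait --for=condition=Ready pod/busybox --timeout=8m
	kubectl --context old-k8s-version-512277 exec busybox -- /bin/sh -c "ulimit -n"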

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-512277 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-512277 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.08s)

TestStartStop/group/old-k8s-version/serial/Stop (12.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-512277 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-512277 --alsologtostderr -v=3: (12.261129626s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.26s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.37s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-512277 -n old-k8s-version-512277
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-512277 -n old-k8s-version-512277: exit status 7 (167.00216ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-512277 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.37s)
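The "exit status 7 (may be ok)" note reflects that minikube's status command encodes component state in its exit code, so a non-zero code on a cleanly stopped cluster is expected rather than an error. A hedged sketch of tolerating it in a wrapper script (this treats 7 as "everything stopped", a reading of minikube's host/kubelet/apiserver status bitmask that should be verified against your minikube version):

	out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-512277 -n old-k8s-version-512277
	rc=$?
	if [ "$rc" -ne 0 ] && [ "$rc" -ne 7 ]; then echo "unexpected status exit code: $rc" >&2; fi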

TestStartStop/group/old-k8s-version/serial/SecondStart (443.91s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-512277 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-512277 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (7m23.460637645s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-512277 -n old-k8s-version-512277
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (443.91s)

TestStartStop/group/no-preload/serial/FirstStart (98.05s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-029193 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-029193 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3: (1m38.051596642s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (98.05s)

TestStartStop/group/no-preload/serial/DeployApp (9.56s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-029193 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d53aa20e-19cd-4a2e-9aa2-1f1c578f1365] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d53aa20e-19cd-4a2e-9aa2-1f1c578f1365] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.029379561s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-029193 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.56s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.25s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-029193 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-029193 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.08927061s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-029193 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.25s)

TestStartStop/group/no-preload/serial/Stop (12.14s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-029193 --alsologtostderr -v=3
E1114 14:30:20.964398 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/functional-943397/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-029193 --alsologtostderr -v=3: (12.139843798s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.14s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-029193 -n no-preload-029193
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-029193 -n no-preload-029193: exit status 7 (93.60481ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-029193 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/no-preload/serial/SecondStart (630.22s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-029193 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3
E1114 14:31:09.661047 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/client.crt: no such file or directory
E1114 14:31:57.413912 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/addons-008546/client.crt: no such file or directory
E1114 14:32:14.368856 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/addons-008546/client.crt: no such file or directory
E1114 14:34:12.702956 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/client.crt: no such file or directory
E1114 14:35:20.963225 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/functional-943397/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-029193 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3: (10m29.79662987s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-029193 -n no-preload-029193
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (630.22s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-cdght" [a2895ba3-1600-403b-94c0-22f7a158e3bb] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.024478726s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-cdght" [a2895ba3-1600-403b-94c0-22f7a158e3bb] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009968562s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-512277 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.38s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p old-k8s-version-512277 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20220726-ed811e41
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.38s)
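VerifyKubernetesImages pulls the image list as JSON over SSH and flags anything outside the expected Kubernetes image set for the profile's version. Assuming jq were available on the host (it is not part of the test harness, which parses the JSON in Go), roughly the same list can be inspected with:

	out/minikube-linux-arm64 ssh -p old-k8s-version-512277 "sudo crictl images -o json" | jq -r '.images[].repoTags[]' | sort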

TestStartStop/group/old-k8s-version/serial/Pause (3.66s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-512277 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-512277 -n old-k8s-version-512277
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-512277 -n old-k8s-version-512277: exit status 2 (389.469101ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-512277 -n old-k8s-version-512277
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-512277 -n old-k8s-version-512277: exit status 2 (438.211853ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-512277 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-512277 -n old-k8s-version-512277
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-512277 -n old-k8s-version-512277
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.66s)
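The Pause test drives a full pause/verify/unpause cycle: after pausing, the Go status templates {{.APIServer}} and {{.Kubelet}} report Paused and Stopped respectively, and status deliberately exits 2, which the test accepts. A minimal sketch of the same cycle by hand, assuming the same profile:

	out/minikube-linux-arm64 pause -p old-k8s-version-512277 --alsologtostderr -v=1
	out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-512277 -n old-k8s-version-512277   # Paused
	out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-512277 -n old-k8s-version-512277    # Stopped
	out/minikube-linux-arm64 unpause -p old-k8s-version-512277 --alsologtostderr -v=1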

TestStartStop/group/embed-certs/serial/FirstStart (85.82s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-838616 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3
E1114 14:36:09.661121 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-838616 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3: (1m25.821445789s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (85.82s)

TestStartStop/group/embed-certs/serial/DeployApp (9.54s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-838616 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [8a69233d-a1bc-4686-abf2-e500d5355415] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [8a69233d-a1bc-4686-abf2-e500d5355415] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.034379851s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-838616 exec busybox -- /bin/sh -c "ulimit -n"
E1114 14:37:14.368324 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/addons-008546/client.crt: no such file or directory
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.54s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.23s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-838616 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-838616 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.106016968s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-838616 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.23s)

TestStartStop/group/embed-certs/serial/Stop (12.11s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-838616 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-838616 --alsologtostderr -v=3: (12.112856912s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.11s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-838616 -n embed-certs-838616

-- stdout --
	Stopped

-- /stdout --
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-838616 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/embed-certs/serial/SecondStart (345.44s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-838616 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3
E1114 14:37:34.970103 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/old-k8s-version-512277/client.crt: no such file or directory
E1114 14:37:34.975378 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/old-k8s-version-512277/client.crt: no such file or directory
E1114 14:37:34.985626 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/old-k8s-version-512277/client.crt: no such file or directory
E1114 14:37:35.005893 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/old-k8s-version-512277/client.crt: no such file or directory
E1114 14:37:35.046066 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/old-k8s-version-512277/client.crt: no such file or directory
E1114 14:37:35.126714 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/old-k8s-version-512277/client.crt: no such file or directory
E1114 14:37:35.287288 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/old-k8s-version-512277/client.crt: no such file or directory
E1114 14:37:35.607790 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/old-k8s-version-512277/client.crt: no such file or directory
E1114 14:37:36.248434 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/old-k8s-version-512277/client.crt: no such file or directory
E1114 14:37:37.529138 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/old-k8s-version-512277/client.crt: no such file or directory
E1114 14:37:40.090188 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/old-k8s-version-512277/client.crt: no such file or directory
E1114 14:37:45.210729 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/old-k8s-version-512277/client.crt: no such file or directory
E1114 14:37:55.451579 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/old-k8s-version-512277/client.crt: no such file or directory
E1114 14:38:15.931797 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/old-k8s-version-512277/client.crt: no such file or directory
E1114 14:38:56.892664 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/old-k8s-version-512277/client.crt: no such file or directory
E1114 14:40:04.012238 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/functional-943397/client.crt: no such file or directory
E1114 14:40:18.813636 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/old-k8s-version-512277/client.crt: no such file or directory
E1114 14:40:20.963775 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/functional-943397/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-838616 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3: (5m44.956997625s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-838616 -n embed-certs-838616
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (345.44s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-rwkd2" [673c41df-bc8a-46c0-9d5b-930c3d9422fc] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.029007796s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-rwkd2" [673c41df-bc8a-46c0-9d5b-930c3d9422fc] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010400653s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-029193 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.44s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p no-preload-029193 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.44s)

TestStartStop/group/no-preload/serial/Pause (3.68s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-029193 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-029193 -n no-preload-029193
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-029193 -n no-preload-029193: exit status 2 (399.75154ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-029193 -n no-preload-029193
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-029193 -n no-preload-029193: exit status 2 (369.328897ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-029193 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-029193 -n no-preload-029193
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-029193 -n no-preload-029193
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.68s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (46.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-002100 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3
E1114 14:41:09.661514 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-002100 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3: (46.125923282s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (46.13s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.49s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-002100 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5105eb98-433c-45a8-ac8b-28e0f27fde2e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5105eb98-433c-45a8-ac8b-28e0f27fde2e] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.028959251s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-002100 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.49s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-002100 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-002100 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.161143376s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-002100 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.28s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-002100 --alsologtostderr -v=3
E1114 14:42:14.368278 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/addons-008546/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-002100 --alsologtostderr -v=3: (12.118387079s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.12s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-002100 -n default-k8s-diff-port-002100
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-002100 -n default-k8s-diff-port-002100: exit status 7 (101.032338ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-002100 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (603.87s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-002100 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3
E1114 14:42:34.969847 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/old-k8s-version-512277/client.crt: no such file or directory
E1114 14:43:02.654009 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/old-k8s-version-512277/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-002100 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3: (10m3.287439789s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-002100 -n default-k8s-diff-port-002100
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (603.87s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (12.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-srp97" [0cd54b76-030c-470d-9d02-decd175f5924] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-srp97" [0cd54b76-030c-470d-9d02-decd175f5924] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.025985789s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (12.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-srp97" [0cd54b76-030c-470d-9d02-decd175f5924] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010814744s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-838616 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.37s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p embed-certs-838616 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.37s)

TestStartStop/group/embed-certs/serial/Pause (3.5s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-838616 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-838616 -n embed-certs-838616
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-838616 -n embed-certs-838616: exit status 2 (373.50052ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-838616 -n embed-certs-838616
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-838616 -n embed-certs-838616: exit status 2 (366.830688ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-838616 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-838616 -n embed-certs-838616
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-838616 -n embed-certs-838616
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.50s)

TestStartStop/group/newest-cni/serial/FirstStart (48.27s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-242569 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-242569 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3: (48.265280602s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (48.27s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.11s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-242569 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-242569 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.106851288s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.11s)
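
addons enable accepts per-addon image and registry overrides as Component=Value pairs; this run points the metrics-server image at echoserver under a deliberately unreachable registry (fake.domain). The general form, taken from the command above:

minikube addons enable metrics-server -p newest-cni-242569 \
  --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
  --registries=MetricsServer=fake.domain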

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.34s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-242569 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-242569 --alsologtostderr -v=3: (1.335085067s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.34s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-242569 -n newest-cni-242569
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-242569 -n newest-cni-242569: exit status 7 (93.459028ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-242569 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)
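
This step deliberately runs against a stopped profile: minikube status prints Stopped and exits 7, which the test tolerates ("may be ok") before enabling the dashboard addon offline. By hand:

minikube status --format={{.Host}} -p newest-cni-242569   # Stopped; exits 7 in this run while the node is down
minikube addons enable dashboard -p newest-cni-242569 \
  --images=MetricsScraper=registry.k8s.io/echoserver:1.4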

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (30.37s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-242569 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-242569 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.3: (29.938076298s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-242569 -n newest-cni-242569
E1114 14:44:58.829573 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/no-preload-029193/client.crt: no such file or directory
E1114 14:44:58.835211 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/no-preload-029193/client.crt: no such file or directory
E1114 14:44:58.845411 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/no-preload-029193/client.crt: no such file or directory
E1114 14:44:58.865919 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/no-preload-029193/client.crt: no such file or directory
E1114 14:44:58.907401 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/no-preload-029193/client.crt: no such file or directory
E1114 14:44:58.987667 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/no-preload-029193/client.crt: no such file or directory
E1114 14:44:59.148126 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/no-preload-029193/client.crt: no such file or directory
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (30.37s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.39s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p newest-cni-242569 "sudo crictl images -o json"
E1114 14:44:59.468283 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/no-preload-029193/client.crt: no such file or directory
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.39s)
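
Image verification shells into the node and dumps the CRI image store as JSON, then scans it for unexpected entries. To eyeball the same list by hand, one possibility (piping through jq is an assumption, not something the test itself does) is:

# List every image tag known to CRI-O inside the node.
minikube ssh -p newest-cni-242569 "sudo crictl images -o json" \
  | jq -r '.images[].repoTags[]'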

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.47s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-242569 --alsologtostderr -v=1
E1114 14:45:00.112409 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/no-preload-029193/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-242569 -n newest-cni-242569
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-242569 -n newest-cni-242569: exit status 2 (403.040898ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-242569 -n newest-cni-242569
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-242569 -n newest-cni-242569: exit status 2 (400.233547ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-242569 --alsologtostderr -v=1
E1114 14:45:01.393386 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/no-preload-029193/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-242569 -n newest-cni-242569
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-242569 -n newest-cni-242569
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.47s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (81.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-127726 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E1114 14:45:09.073814 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/no-preload-029193/client.crt: no such file or directory
E1114 14:45:19.314830 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/no-preload-029193/client.crt: no such file or directory
E1114 14:45:20.963471 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/functional-943397/client.crt: no such file or directory
E1114 14:45:39.795155 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/no-preload-029193/client.crt: no such file or directory
E1114 14:46:09.660766 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/client.crt: no such file or directory
E1114 14:46:20.756350 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/no-preload-029193/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-127726 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m21.303946016s)
--- PASS: TestNetworkPlugins/group/auto/Start (81.30s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-127726 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.38s)
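
KubeletFlags simply greps the live kubelet process over SSH so the test can assert that the flags minikube configured actually reached the command line. Run by hand it looks like this (the PID and argument list are illustrative and will vary):

minikube ssh -p auto-127726 "pgrep -a kubelet"
# e.g. 1234 /var/lib/minikube/binaries/v1.28.3/kubelet --container-runtime-endpoint=unix:///var/run/crio/crio.sock ...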

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-127726 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-hbx8n" [4bec0958-02db-404b-85ce-5192d3f8825e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-hbx8n" [4bec0958-02db-404b-85ce-5192d3f8825e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.011386505s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.35s)
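
The netcat workload comes from testdata/netcat-deployment.yaml, whose contents this report does not include; from the pod events above one can infer a Deployment labelled app=netcat with a single container named dnsutils, plus (given the later Localhost/HairPin probes against port 8080) a Service called netcat. A hypothetical stand-in, with image and command as explicit assumptions, applied with kubectl apply for simplicity:

# Hypothetical approximation of testdata/netcat-deployment.yaml;
# the real manifest's image, command, and ports may differ.
kubectl --context auto-127726 apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: netcat
spec:
  replicas: 1
  selector:
    matchLabels:
      app: netcat
  template:
    metadata:
      labels:
        app: netcat
    spec:
      containers:
      - name: dnsutils
        # Assumed image: anything that ships nslookup and nc will do.
        image: registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3
        command: ["/bin/sh", "-c", "while true; do nc -l -p 8080; done"]
---
apiVersion: v1
kind: Service
metadata:
  name: netcat
spec:
  selector:
    app: netcat
  ports:
  - port: 8080
EOF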

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-127726 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.24s)
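
The DNS probe resolves the in-cluster API service name from inside the pod, proving that pod-to-cluster-DNS traffic works over the CNI under test:

kubectl --context auto-127726 exec deployment/netcat -- nslookup kubernetes.default
# Expect a successful resolution of kubernetes.default.svc.cluster.local;
# in a default minikube cluster the DNS server is kube-dns at 10.96.0.10
# (addresses vary by cluster).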

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-127726 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-127726 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.21s)
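
Localhost and HairPin use the same nc probe in zero-I/O mode: -z only tests that a TCP connect succeeds, -w 5 bounds the wait, and -i 5 spaces the attempts. The only difference is the target: the localhost check confirms the pod can reach its own port directly, while the hairpin check dials the pod's own Service name, which must route back to the same pod through the service VIP:

# Direct loopback connect from inside the pod.
kubectl --context auto-127726 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
# Hairpin: connect to the 'netcat' Service, which selects this same pod.
kubectl --context auto-127726 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"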

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (81.96s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-127726 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E1114 14:47:14.368390 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/addons-008546/client.crt: no such file or directory
E1114 14:47:34.970024 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/old-k8s-version-512277/client.crt: no such file or directory
E1114 14:47:42.677473 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/no-preload-029193/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-127726 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m21.961172576s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (81.96s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (5.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-n92jm" [fb7823aa-243f-4b03-b9e9-342fbe408248] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.035419615s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.04s)
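
ControllerPod waits for the CNI's own daemon pods to become healthy before exercising the network; the equivalent check from the command line is a label-selector wait in kube-system:

kubectl --context kindnet-127726 -n kube-system wait \
  --for=condition=Ready pod -l app=kindnet --timeout=10m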

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-127726 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.33s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-127726 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-njjlb" [149a04e7-c7d0-4208-89dd-914203f7ce69] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-njjlb" [149a04e7-c7d0-4208-89dd-914203f7ce69] Running
E1114 14:48:37.414239 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/addons-008546/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.015390502s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.43s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-127726 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-127726 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-127726 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.21s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (71.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-127726 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E1114 14:49:58.830007 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/no-preload-029193/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-127726 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m11.147716215s)
--- PASS: TestNetworkPlugins/group/calico/Start (71.15s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (5.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-57hrk" [56ddca33-2d12-4330-9032-6139a42e9b3e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.035811697s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.04s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-127726 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.35s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-127726 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-k8zck" [3c0a187d-0885-4adc-b1a5-929d17da986d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1114 14:50:20.963420 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/functional-943397/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-k8zck" [3c0a187d-0885-4adc-b1a5-929d17da986d] Running
E1114 14:50:26.517710 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/no-preload-029193/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.013150422s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.41s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-127726 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.26s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-127726 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-127726 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.22s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (68.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-127726 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E1114 14:51:09.660698 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/ingress-addon-legacy-814110/client.crt: no such file or directory
E1114 14:51:27.432474 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/auto-127726/client.crt: no such file or directory
E1114 14:51:27.437782 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/auto-127726/client.crt: no such file or directory
E1114 14:51:27.448031 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/auto-127726/client.crt: no such file or directory
E1114 14:51:27.468289 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/auto-127726/client.crt: no such file or directory
E1114 14:51:27.508593 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/auto-127726/client.crt: no such file or directory
E1114 14:51:27.588887 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/auto-127726/client.crt: no such file or directory
E1114 14:51:27.749220 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/auto-127726/client.crt: no such file or directory
E1114 14:51:28.069813 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/auto-127726/client.crt: no such file or directory
E1114 14:51:28.710143 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/auto-127726/client.crt: no such file or directory
E1114 14:51:29.990789 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/auto-127726/client.crt: no such file or directory
E1114 14:51:32.551952 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/auto-127726/client.crt: no such file or directory
E1114 14:51:37.672513 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/auto-127726/client.crt: no such file or directory
E1114 14:51:47.913285 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/auto-127726/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-127726 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m8.039448772s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (68.04s)
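
Unlike the named CNIs above (kindnet, calico), the custom-flannel profile passes a file path to --cni, which minikube accepts in place of a built-in plugin name and applies as a user-supplied CNI manifest:

minikube start -p custom-flannel-127726 --memory=3072 \
  --cni=testdata/kube-flannel.yaml \
  --wait=true --wait-timeout=15m \
  --driver=docker --container-runtime=crio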

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-127726 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.36s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-127726 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-fk2rq" [879ccca7-5663-4658-b902-be8410a08550] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1114 14:52:08.394365 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/auto-127726/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-fk2rq" [879ccca7-5663-4658-b902-be8410a08550] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.012261619s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.42s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-127726 exec deployment/netcat -- nslookup kubernetes.default
E1114 14:52:14.368212 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/addons-008546/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.25s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-127726 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.25s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-127726 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.23s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-9p65c" [03cfd350-4dcd-420b-9061-3320190f5ec2] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.031398249s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.03s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.13s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-9p65c" [03cfd350-4dcd-420b-9061-3320190f5ec2] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.015498975s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-002100 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.13s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.53s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 ssh -p default-k8s-diff-port-002100 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.53s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (5.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-002100 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-002100 --alsologtostderr -v=1: (1.249472946s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-002100 -n default-k8s-diff-port-002100
E1114 14:52:34.969977 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/old-k8s-version-512277/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-002100 -n default-k8s-diff-port-002100: exit status 2 (495.113906ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-002100 -n default-k8s-diff-port-002100
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-002100 -n default-k8s-diff-port-002100: exit status 2 (484.12179ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-002100 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p default-k8s-diff-port-002100 --alsologtostderr -v=1: (1.156859329s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-002100 -n default-k8s-diff-port-002100
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-002100 -n default-k8s-diff-port-002100
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (5.27s)
E1114 14:55:14.435050 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/calico-127726/client.crt: no such file or directory
E1114 14:55:14.440329 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/calico-127726/client.crt: no such file or directory
E1114 14:55:14.450591 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/calico-127726/client.crt: no such file or directory
E1114 14:55:14.470830 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/calico-127726/client.crt: no such file or directory
E1114 14:55:14.511063 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/calico-127726/client.crt: no such file or directory
E1114 14:55:14.591330 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/calico-127726/client.crt: no such file or directory
E1114 14:55:14.751733 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/calico-127726/client.crt: no such file or directory
E1114 14:55:15.072229 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/calico-127726/client.crt: no such file or directory
E1114 14:55:15.713195 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/calico-127726/client.crt: no such file or directory
E1114 14:55:16.994206 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/calico-127726/client.crt: no such file or directory
E1114 14:55:19.554390 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/calico-127726/client.crt: no such file or directory
E1114 14:55:20.963416 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/functional-943397/client.crt: no such file or directory
E1114 14:55:24.674970 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/calico-127726/client.crt: no such file or directory

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (99.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-127726 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-127726 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m39.109213213s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (99.11s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (76.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-127726 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E1114 14:52:49.354613 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/auto-127726/client.crt: no such file or directory
E1114 14:53:23.646056 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/kindnet-127726/client.crt: no such file or directory
E1114 14:53:23.651184 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/kindnet-127726/client.crt: no such file or directory
E1114 14:53:23.661423 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/kindnet-127726/client.crt: no such file or directory
E1114 14:53:23.681697 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/kindnet-127726/client.crt: no such file or directory
E1114 14:53:23.721916 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/kindnet-127726/client.crt: no such file or directory
E1114 14:53:23.802211 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/kindnet-127726/client.crt: no such file or directory
E1114 14:53:23.962724 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/kindnet-127726/client.crt: no such file or directory
E1114 14:53:24.283554 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/kindnet-127726/client.crt: no such file or directory
E1114 14:53:24.924844 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/kindnet-127726/client.crt: no such file or directory
E1114 14:53:26.205690 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/kindnet-127726/client.crt: no such file or directory
E1114 14:53:28.766794 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/kindnet-127726/client.crt: no such file or directory
E1114 14:53:33.887495 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/kindnet-127726/client.crt: no such file or directory
E1114 14:53:44.127672 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/kindnet-127726/client.crt: no such file or directory
E1114 14:53:58.014351 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/old-k8s-version-512277/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-127726 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m16.11702622s)
--- PASS: TestNetworkPlugins/group/flannel/Start (76.12s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (5.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-k9886" [d8d78e5e-635b-4dd2-9b98-d74a0b05df5a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.034714367s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.04s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-127726 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.34s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-127726 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-nx8n2" [a46e48bd-cdb4-46c6-9423-20d60f2913e1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1114 14:54:04.608193 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/kindnet-127726/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-nx8n2" [a46e48bd-cdb4-46c6-9423-20d60f2913e1] Running
E1114 14:54:11.275259 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/auto-127726/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.0162699s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.39s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-127726 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.25s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-127726 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.22s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-127726 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.22s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.52s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-127726 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.52s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.5s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-127726 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-9fxvj" [3500c470-d99f-41ba-b86f-5224332adcfd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-9fxvj" [3500c470-d99f-41ba-b86f-5224332adcfd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.011367472s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.50s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-127726 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.34s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-127726 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.25s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-127726 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.34s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (50.83s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-127726 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E1114 14:54:45.571660 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/kindnet-127726/client.crt: no such file or directory
E1114 14:54:58.832661 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/no-preload-029193/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-127726 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (50.828829873s)
--- PASS: TestNetworkPlugins/group/bridge/Start (50.83s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-127726 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.34s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-127726 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-xjwtg" [31a8ba2a-ebcc-4cbc-98d0-ad706778e91c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1114 14:55:34.915751 1191690 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17581-1186318/.minikube/profiles/calico-127726/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-xjwtg" [31a8ba2a-ebcc-4cbc-98d0-ad706778e91c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.011557959s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.36s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-127726 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.24s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-127726 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-127726 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.19s)

                                                
                                    

Test skip (29/308)

TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.3/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.3/cached-images (0.00s)

TestDownloadOnly/v1.28.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.3/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within it.
--- SKIP: TestDownloadOnly/v1.28.3/binaries (0.00s)

TestDownloadOnly/v1.28.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.3/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.3/kubectl (0.00s)

TestDownloadOnlyKic (0.64s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:225: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-182153 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:237: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-182153" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-182153
--- SKIP: TestDownloadOnlyKic (0.64s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:443: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:497: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.19s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-937285" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-937285
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)

TestNetworkPlugins/group/kubenet (6.12s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-127726 [pass: true] --------------------------------
The kubenet-127726 profile was never started, so every probe below failed; identical outputs are grouped.

>>> netcat probes (nslookup kubernetes.default; nslookup debug kubernetes.default a-records; dig search kubernetes.default; dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53 and tcp/53; nc 10.96.0.10 udp/53 and tcp/53; /etc/nsswitch.conf; /etc/hosts; /etc/resolv.conf) and k8s queries (nodes, services, endpoints, daemon sets, deployments and pods; cms):
Error in configuration: context was not found for specified context: kubenet-127726

>>> k8s describe/logs probes (netcat deployment, pod(s), and logs; coredns deployment, pods, and logs; api server pod(s) and logs; kube-proxy daemon set, pod(s), and logs):
error: context "kubenet-127726" does not exist

>>> host probes (/etc/nsswitch.conf; /etc/hosts; /etc/resolv.conf; crictl pods; crictl containers; /etc/cni; ip a s; ip r s; iptables-save; iptables table nat; kubelet daemon status and config; /etc/kubernetes/kubelet.conf; /var/lib/kubelet/config.yaml; docker daemon status and config; /etc/docker/daemon.json; docker system info; cri-docker daemon status and config; /etc/systemd/system/cri-docker.service.d/10-cni.conf; /usr/lib/systemd/system/cri-docker.service; cri-dockerd version; containerd daemon status and config; /lib/systemd/system/containerd.service; /etc/containerd/config.toml; containerd config dump; crio daemon status and config; /etc/crio; crio config) and kubelet logs:
* Profile "kubenet-127726" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-127726"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

----------------------- debugLogs end: kubenet-127726 [took: 5.907076373s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-127726" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-127726
--- SKIP: TestNetworkPlugins/group/kubenet (6.12s)
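The uniform failures in the debugLogs above are expected: the kubenet-127726 profile is never started for a skipped test, so neither kubectl nor minikube knows it. A quick hedged sketch for confirming that a context and profile exist before probing them:

	# Does kubectl know the context?
	kubectl config get-contexts -o name | grep -x kubenet-127726 || echo "context missing"
	# Does minikube know the profile?
	minikube profile list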

                                                
                                    
TestNetworkPlugins/group/cilium (6.15s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-127726 [pass: true] --------------------------------
The cilium-127726 profile was never started, so every probe below failed; identical outputs are grouped.

>>> netcat probes (nslookup kubernetes.default; nslookup debug kubernetes.default a-records; dig search kubernetes.default; dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53 and tcp/53; nc 10.96.0.10 udp/53 and tcp/53; /etc/nsswitch.conf; /etc/hosts; /etc/resolv.conf) and k8s queries (nodes, services, endpoints, daemon sets, deployments and pods; cms; describe cilium daemon set and its pod(s); describe cilium deployment and its pod(s)):
Error in configuration: context was not found for specified context: cilium-127726

>>> k8s describe/logs probes (netcat deployment, pod(s), and logs; coredns deployment, pods, and logs; api server pod(s) and logs; kube-proxy daemon set, pod(s), and logs; cilium daemon set container(s) logs, current and previous; cilium deployment container(s) logs, current and previous):
error: context "cilium-127726" does not exist

>>> host probes (/etc/nsswitch.conf; /etc/hosts; /etc/resolv.conf; crictl pods; crictl containers; /etc/cni; ip a s; ip r s; iptables-save; iptables table nat; kubelet daemon status and config; /etc/kubernetes/kubelet.conf; /var/lib/kubelet/config.yaml; docker daemon status and config; /etc/docker/daemon.json; docker system info; cri-docker daemon status and config; /etc/systemd/system/cri-docker.service.d/10-cni.conf; /usr/lib/systemd/system/cri-docker.service; cri-dockerd version; containerd daemon status and config; /lib/systemd/system/containerd.service; /etc/containerd/config.toml; containerd config dump; crio daemon status and config; /etc/crio; crio config) and kubelet logs:
* Profile "cilium-127726" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-127726"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

----------------------- debugLogs end: cilium-127726 [took: 5.895957125s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-127726" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-127726
--- SKIP: TestNetworkPlugins/group/cilium (6.15s)